Ask HN: Are we overcomplicating software development?
639 points by ian0 on Jan 18, 2017 | 368 comments
I have recently been involved in the overhaul of an established business with poor output into a functioning early/mid stage startup (long story). We are back on track but, honestly, my lessons learned fly in the face of a lot of currently accepted wisdom:

1) Choose languages that developers are familiar with, not the best tool for the job

2) Avoid microservices where possible, the operational cost considering devops is just immense

3) Advanced reliability / redundancy even in critical systems ironically seems to cause more downtime than it prevents, due to the complexity it introduces for dev & devops.

4) Continuous integration seems to be a plaster on the problem of complex devops introduced by microservices.

5) Agile "methodology" when used as anything but a tool to solve specific, discrete, communications issues is really problematic

I think overall we seem to be over-complicating software development. We look to architecture and process for flexibility when in reality it's acting as a crutch for a lack of communication and proper analysis of how we should be architecting the actual software.

Is it just me?




Many of these practices are popularized by Google/Facebook/Amazon but don't make sense for a company with 100 or even 1,000 people. I try to focus on whether a practice will solve a concrete problem we're facing.

Switching from Hadoop to Spark was clearly a good idea for our team, even though it required learning a new stack, but there isn't a strong reason to switch to Flink or start using Haskell.

Agile makes sense when your main risk is fine-grained details of user requirements, but not when you have other substantial risks, such as making sure a statistical algorithm is accurate enough.

Microservices probably reduce the asymptotic cost of scaling but add a huge constant factor.

Relational databases are the right choice 95% of the time, non-relational stores require a really specific use case.

TDD is good for fast feedback in some domains, but for others, manually investigating the output or putting your logic into types is better. E.g. a lot of my time goes into scaling jobs that work on 10 GB of data but crash on 1 TB; TDD is not that helpful there.

Continuous integration mostly makes sense when you're making a lot of small changes and can reliably expect a test suite to catch issues.

In short, ask the question "when is practice X useful?" instead of "is practice X a good idea?"


> Microservices probably reduce the asymptotic cost of scaling but add a huge constant factor.

If this were Medium, I'd highlight the hell out of that.

That's so true, and so nicely, succinctly put - it ought to be the reply to end every argument about whether microservices are good or bad.


At the last company I was at, our search microservice was fast (average response was well under 100ms) and it didn't crash once while I was there. At a larger company, this may not be an accomplishment. At a startup, this is the bee's knees.

Meanwhile, the rest of our codebase (a monolith) crashed every few days for one reason or another. We had an on-call rotation not because that's what you're supposed to do, but because we actually needed it.

Now I'm not saying that microservices make sense for everyone. In general, I agree that they are used incorrectly. Microservices are hot and software developers, generally speaking, like to use hot technologies. Yes, moving to a microservice was costly. We had to re-write a lot of code, we had to set up our own servers, and we had to get permission from the guardians that be to do all of this. But, for our use case, and I assume there are other use cases too, the benefits of detaching ourselves from the company's monolithic codebase far outweighed the costs for doing so.

TL;DR No argument is the end to every conversation. Few things are so black and white.


I tend to start with a monolithic service.

Sooner or later you get a feel for which bits are becoming at least API stable and could run independently. That's when I split them out.

Do it too soon and you end up choosing the wrong boundaries and tying yourself up in knots, do it too late and your monolith can become a mess that's difficult to detach the pieces of.


I tend to try to write monolithic services in such a way that they could be broken up into microservices if that were ever desired.

I don't go too far with this, just avoid things like shared static state and other anti-patterns.
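For illustration, a minimal sketch of the kind of thing I mean (Python, all names hypothetical): collaborators are passed in rather than reached for as shared static state, so this piece could later move behind its own API without untangling hidden globals.

  class BillingService:
      def __init__(self, accounts_repo, event_log):
          # Collaborators are injected; nothing here assumes they live in the
          # same process, so this class could sit behind an RPC boundary later.
          self._accounts = accounts_repo
          self._events = event_log

      def charge(self, account_id: str, amount_cents: int) -> None:
          account = self._accounts.load(account_id)
          account.balance_cents -= amount_cents
          self._accounts.save(account)
          self._events.append({"type": "charge",
                               "account": account_id,
                               "amount_cents": amount_cents})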


You mean you follow good software development practices? Heresy!


Another option is to start with an umbrella app (Erlang/Elixir/OTP). It can run like a monolith, or as ... nano-services (I suppose) within the same monolith. When it is time to split them out, it is easier.

It does assume that you either start with devs familiar with OTP or you have generalist devs that can pick things up quickly.


True. There's another thread in here somewhere talking about premature generalization. I think that's what you're getting at with "Do it too soon and you end up choosing the wrong boundaries".


TBH microservices do a good job of making you much more dependent on your tools, and selecting the wrong tool for the job won't become clear until you've used that tool for years.


At the last place I was at, we had a microserviced monolith. I can't even begin to describe that thing in common engineering terms. (note: it's better than it seems).


In case you're wondering about the downvotes, a microservices monolith sounds like an oxymoron.

Could you expand on how the architecture actually looked? What made it a monolith and what made it micro serviced?


Maybe they were referring to a distributed monolith?


I believe, for small shops, the real benefit of microservices is the logic split that forces good design and reduces cognitive load.

You reap the scaling benefits way later, if ever.


You can get that benefit by dividing your system up into libraries with defined, documented, tested APIs. There's no need to introduce all the complexity and failure modes of distributed systems just to force good design.

When you need to scale, then you can easily throw your libraries behind an RPC framework and call it microservices, but there's no need to pay that cost until you actually face that problem.
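A toy sketch of that progression (Python, stdlib only; everything here is made up): the library function is an ordinary in-process call today, and the "microservice" is just a thin wrapper added later when scaling actually demands it.

  import json
  from http.server import BaseHTTPRequestHandler, HTTPServer
  from urllib.parse import parse_qs, urlparse

  # The library: a documented, testable API with no knowledge of HTTP.
  def search_products(query: str, limit: int = 10) -> list:
      catalog = [{"name": "red shoe"}, {"name": "blue shoe"}]
      return [p for p in catalog if query in p["name"]][:limit]

  # The later "microservice": a thin transport layer over the same call.
  class Handler(BaseHTTPRequestHandler):
      def do_GET(self):
          query = parse_qs(urlparse(self.path).query).get("q", [""])[0]
          body = json.dumps(search_products(query)).encode()
          self.send_response(200)
          self.send_header("Content-Type", "application/json")
          self.end_headers()
          self.wfile.write(body)

  if __name__ == "__main__":
      HTTPServer(("", 8080), Handler).serve_forever()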


Just putting libraries that were never designed to scale up behind RPC usually won't help you scale. These libraries tend to work with mutable, stateful objects and don't have any groundwork in place for partitioning.

That doesn't mean you can't scale up from a monolith (even one without clean interfaces) - every startup growth story is a testament otherwise - but it's never as easy as strapping an RPC layer over your library.


One caveat is that if you need to fix a bug in your library in an API-compatible way, you can't reach into all the codebases that are using your library. You can deploy a new version of the microservice, though.


I mean, you _can_ if you organize your code such that you can. For example, Google's monorepo lets maintainers of a library find all internal usages and fix them. This is one of the benefits Dan Luu notes in http://danluu.com/monorepo/.


I think he means that you can't force all teams that use your library to recompile and pickup the updated code, while if you deploy it as a service, you recompile and redeploy and everyone talking to your service gets the most up-to-date version.

This is a real problem - I recall that Sanjay Ghemawat et al. were working on it when I left Google, though I dunno if the solution they came up with is public yet. It's unlikely to seriously affect you unless you're Google-scale, though, by which time you've probably divided everything up into services and aren't taking advice from the Internet anyway. For companies that are a few teams working on a single product, it's easy enough to send a company-wide e-mail saying "Rebuild & redeploy anything that depends upon library X", and if you're doing continuous deployment or deploy only as a single artifact, the problem never affects you anyway.


You were initially replying to a suggestion explicitly qualifying this with "for small shops". Yes, you most definitely can force all teams that use your library to recompile and pickup the updated code - and it doesn't mean "a company wide email", a realistic scenario would involve standing up, pointing to a specific person and saying "Bob, the new version of my library will also work better for the performance problems you had, pick it up whenever you're ready"; and knowing that it's an exhaustive list of people who need to be informed.

For starters, the vast majority of code is developed in-house in non-software companies. The vast majority of products are a single team working essentially in a silo, not "a few teams working on a single product".

When people are talking about small companies, it's misleading to think "smaller than Google". Smaller-than-Google is still an enormous quantity of development. Enterprisey practices make sense for scaling software in companies that are smaller-than-smaller-than-Google. If you hear "small company", think multiple steps further from that: a smaller-than-smaller-than-smaller-than-smaller-than-smaller-than-Google company.


> you recompile and redeploy and everyone talking to your service gets the most up-to-date version

Sure, but if you do that in place it will still break stuff that assumes it works like the last version, and if you do a versioned API or the like you still can't force all teams to adopt the new version.


> I think he means that you can't force all teams that use your library to recompile and pickup the updated code

Does your CI system not automatically build dependent artifacts--

> It's unlikely to seriously affect you unless you're Google-scale, though

--okay, whew. ;)


If you need to change your microservice's API in a non-backwards compatible way, you have the exact same problem plus significant operational complexity.


Don't you just create a new one and let the old go obsolete when the "users" switch?


Which is basically what you do for a traditional library as well. Tweak the header so anything being recompiled against it gets a different function signature. Then old apps continue to work, and newly built apps get the fix.


Moderately ironically - this is a place where dynamically loaded libraries are particularly well suited. So long as the API hasn't changed, the library can be patched independently of all the other compiled code.

Of course, there are other limitations this imposes, but it does make it very simple to deploy a new library to all code which uses it.


> you can't reach into all the codebases that are using your library

You can deploy a new version of the dll and applications can pick it up when they restart. Linux distributions apply security patches this way.


Better, you can do it without a restart if you can serialise current state. That also enforces discipline in defining such state.

Microservices are only a step ahead.

That said, in many cases the cost of a full restart can be accepted.


Nothing about splitting your app into microservices _forces_ a good design. I've never seen microservices with well-defined seams. Every time, knowledge "leaked" between the apps, and any non-trivial change to the app required updating multiple repos, deployment synchronization, etc. Microservices are a tremendous burden that the vast majority of companies will not benefit from.


I did not mean microservice as in "just make it many apps!". I meant: do not share databases, and expose everything as APIs.

It helps cognitive load because such apps can be reasoned about without reading code elsewhere.


An API is not enough; the full contract has to be shared.


> forces good design and reduces cognitive load

Except splitting into microservices is an unnecessarily complex design choice. That's almost always worse, and the cognitive load comes in when you now need to figure out how to get this stuff right. The scaling benefits also require that you get it right, small flaws in your system become massive issues.


"Is your bicycle too slow? Get a helicopter!"


If you separate components wrong in the same code base, it's an easy fix. If you get them wrong between services, you have a much larger problem. I'm not sure why you'd be more likely to get that right with services than within the same code base.


"Logic" is vague and there a several layers you can implement this before even thinking about microservices.

It can be as simple as a single class, or maybe a larger class as a single-file service, or an entire namespace with several classes, or a separate, easily referenced library. All the "logic" split benefits without the ridiculous hassle of microservices.


I think this is actually a failure in mainstream programming languages, which make it far too easy to reach across what's meant to be a defined subsystem boundary and meddle where you shouldn't.


They also have tools too weak to automate enforcing contracts. Generally the only available tool is "assert".


Definitely agree – the polyglot aspect can also be useful for companies where different parts of their problem fit different tools.

However, exercising proper software discipline and using languages with good/existent module systems, like OCaml or Go, can lead to the same modular results without the fixed overhead. If you don't have a full-time ops person or team, you almost always have no business running microservices.


> ask the question "when is practice X useful?" instead of "is practice X a good idea?"

This too!


It applies to TDD as well, unfortunately a lot of TDD proponents don't really acknowledge that.


> Relational databases are the right choice 95% of the time, non-relational stores require a really specific use case.

Relational databases are great, but I spent large parts of my life as a developer writing layers converting to/from SQL and later ORMs. There's a huge gain in just not translating data. I know Postgres (and others) deal with JSON, but I can't escape the feeling it's a bit shoehorned in there – basic SQL statements have strange new operators like ->> -> #>>.

Relational databases are great for, well, relational data with strong consistency requirements. The popularity of the original MyISAM tables without integrity checks baffled me at the time. Why spend time marshalling data in/out of table form when you don't gain the benefits of an RDBMS?

Not doing data translation saves _a lot_ of time. Plain key-value stores are amazing, document stores like Elasticsearch are great, ultimately the choice comes down to requirements and time saving is often a very heavy argument, especially for small companies/startups.


Things such as joins, transactions, and means of enforcing data integrity are useful when solving a whole slew of problems. Not to mention the tooling and community you benefit from when you use a common RDBMS.

I never found data translation/serialization to be a big pain (just rely on a framework/lib that does it for you). It's a bigger pain to hand-roll joins that would be a one-liner in SQL or deal with issues that arise from having your data (unnecessarily) reside in many systems.


I hear this data-integrity thing a lot but I don't run into these problems myself. I think it might be a functional-programming thing. It's much easier and safer to declare your constraints rather than trying to enforce them. If you aren't in a functional language I can see why you'd want to reach for one, but SQL in a separate process is just one of the options.

> Things such as joins, transactions, and means of enforcing data integrity are useful

If the domain needs transactions I'm already speccing them regardless of general usefulness. Yes all that stuff is useful, but all abstractions have a cost which isn't free just because it's hidden in the DB.

For a problem that didn't need transactions, but for which they were useful, why would you automatically want to couple the solution with your storage layer? If you're looking for the ability to express business logic clearly without cluttering it with error handling, for instance, software transactional memory would probably be a better level to work at.


> If the domain needs transactions [...]

Considering we're talking about daemon software (services exposing some API to readers and writers) that provides CRUD behavior to the user (end-user or other developer), isn't that nearly always the case, to guarantee safe write access for concurrent writers without the risk of crippling your data?

Furthermore I am not sure how this relates to FP.

> It's much easier and safer to declare your constraints rather than trying to enforce them.

But that is a strong point of RDBMs implementing SQL. You have some kind of schema (think type) and use selected functions (select, update, delete, create, etc.) to transform the data.


> > It's much easier and safer to declare your constraints rather than trying to enforce them.

> Furthermore I am not sure how this relates to FP.

I'm saying that FP is great for ensuring correctness.

> But that is a strong point of RDBMs implementing SQL.

Right. And if I didn't have other functional languages available that might be a bigger issue.


STM is awesome but a single node solution. Distributed transactions are a hard problem.


I understand, but you can't just throw a DB at it and walk away. For instance, which DB? Set up how? Running on what type of hosts? What topological requirements does this have? How much of a multiplier does it place on your data load?

I'd definitely use a trusted DB for storing bank accounts. The consistency is pretty much the first requirement and the data maps perfectly to tables. And 7B checking accounts isn't that big compared to some problems, so it'd probably scale pretty well even worst-case.

But I probably wouldn't for an MMO. Or at least, it wouldn't be where I stored every little thing going on around them - just the events (xp and gold earned) that they'd freak out about if we lost. But even just a log-structured DB would work well for that.

If there's no contention for a resource (in the bank case - the value in the account) there's much less reason for a transaction. I want the system to make its best effort but I don't want to wait around for the message if there isn't anything I can do on a failure anyways.


Are there domains that don't need transactions?


Exactly.

I start with text files and for most purposes I do not bother with anything else.

Next is a key/value store. Simple.

Relational databases carry large overhead in translating data (as above) and also in design and maintenance of the structure and getting data into and out of them. I spent many years with them, like them a lot, but they are too much complication for most purposes.

Even with relational data RDBMS are only good if you are not certain of how you will be accessing the data. In most cases you are sure.

I am constantly stunned how people reach straight for MySQL or Postgres when flat text files with grep would work just as well and be much quicker to implement


I'm stunned that you're stunned that people generally don't use text files as data stores.


How do you deal with concurrent write access, do you lock the file?


He probably uses flock(2)
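Something like this, presumably (a Python sketch assuming POSIX; the sidecar lock file and the write-temp-then-rename step are my own guesses at the standard recipe): the exclusive flock serializes writers, and the atomic rename means a crash mid-write leaves the old file intact.

  import fcntl
  import os
  import tempfile

  def atomic_update(path, new_contents):
      # Exclusive advisory lock on a sidecar file so concurrent writers serialize.
      with open(path + ".lock", "w") as lockfile:
          fcntl.flock(lockfile, fcntl.LOCK_EX)
          # Write to a temp file in the same directory, flush it to disk, then
          # atomically swap it into place; a crash before os.replace() leaves
          # the old file untouched.
          directory = os.path.dirname(os.path.abspath(path))
          fd, tmp_path = tempfile.mkstemp(dir=directory)
          try:
              with os.fdopen(fd, "w") as tmp:
                  tmp.write(new_contents)
                  tmp.flush()
                  os.fsync(tmp.fileno())
              os.replace(tmp_path, path)
          except BaseException:
              os.unlink(tmp_path)
              raise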


But isn't that hard to get right? At least if you use something like SQLite you get consistency guarantees.

Consider your process writing to the file and dying during write() - do you recover and repair the file after you reschedule?


Ramdisk maybe?


seriously curious -- what do you do when someone or some other "function" wants to query your data, wants to update it, etc.?


You can implement key/value stores in RDBMSs too. It only takes a few minutes to create a key/value table in most databases, combined with a few minutes in your favorite language to map it to an appropriate get/set routine. I find this particularly useful for variable attributes against another table, especially when it's really a "foreign index, key, value" table. That way it's still possible to join the values to other parts of the database. This paradigm really lends itself to multiple FK/key/value tables, where each one extends another particular table.

All that said, doing this requires careful thought, and DB normalization when it's discovered that there is a 1:1 relationship between rows in a table and a particular key/value table. So, it's not something that should be taken to an extreme, but I find it aids in quick development, as every time you discover you need to store another piece of data for some edge condition it doesn't require lots of DB normalization. Also, I wouldn't really consider making the "value" field a blob, rather a very limited int or string.
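A toy version of that layout (sketched with Python and SQLite purely for illustration; the table and column names are invented):

  import sqlite3

  conn = sqlite3.connect(":memory:")
  conn.execute("""
      CREATE TABLE user_attrs (
          user_id INTEGER NOT NULL,   -- FK into the main users table
          key     TEXT    NOT NULL,
          value   TEXT,               -- a limited string, not a blob
          PRIMARY KEY (user_id, key)
      )
  """)

  def set_attr(user_id, key, value):
      conn.execute(
          "INSERT OR REPLACE INTO user_attrs (user_id, key, value) VALUES (?, ?, ?)",
          (user_id, key, value))

  def get_attr(user_id, key):
      row = conn.execute(
          "SELECT value FROM user_attrs WHERE user_id = ? AND key = ?",
          (user_id, key)).fetchone()
      return row[0] if row else None

  # Because the values live in the database, they can still be joined
  # against other tables when needed.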


TDD is not about the tests, it is about teaching yourself to code better. About forcing yourself to break up your code into small components with well defined interfaces. Something we all want to do but which is hard in practice without a tool to guide you. TDD is that tool.


In our case, we did that using functional programming & type-driven design (which ironically is also a TDD.)


This this this!

Most often I see TDD used as a crutch for a really bad type system.


Types are just formalised limited contracts. Emphasis on limited. They are not enough.


T(est)DD does tend to make your code look more like you used a functionally-oriented language. A lot less mutability, heavier use of first-class functions, etc. Though I tend to prefer languages that are at least somewhat functional (e.g. Python, Go).


> Though I tend to prefer languages that are at least somewhat functional (e.g. Python, Go).

Can't think of any less functional languages than these.


Think of the language features you need to be able to program in a functional way, then think about what these languages have. There is a sizable overlap.

If you need a template then think of scheme. It is the best example of a minimal functional language.


It doesn't even help people apply SOLID principles enough.

Pure functions help but the principles of generalisation are deeper than this.

Tests verify contracts offline. This will miss real issues.


"putting your logic into types is better"

Can you elaborate on that one? Sounds interesting to me.


In functional, statically typed programming languages, there is a pattern where business logic, including "actions", is encoded in types. This article [1] gives an example of filesystem manipulations that are encoded in a "FreeF" type.

When business rules are encoded in datatypes, it's easy to check that the encoding and the transformations are complete, and the logic can easily be mock-tested.

[1] http://degoes.net/articles/modern-fp


Some rules are extremely hard to encode as types or the result is extremely awkward. Or worse, performance suffers due to encoding.

If it feels like translating into a foreign language, then it likely is that exotic, or you are using the wrong language.


Suppose you have some business logic that subtracts the cost of a transaction from an account balance and returns a new account balance. These things are probably integers, but in many languages you don't have to specify that. You write this function, then later your coworker comes across it and passes it a double. You might end up with weird small discrepancies in account balances (or mysterious errors that only happen sometimes) that could be totally prevented at the time your colleague wrote the code via static analysis, if you put some logic (costs and balances are integers) into the types.

This can be more sophisticated, like "this function requires a sorted list", so let's make a sorted list type, or packaging things up into biz logic types (a cost type that contains an integer instead of just using integers), but you can catch a wide variety of errors with static analysis if you make your code and logic amenable.
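A minimal illustration of the same idea, here with Python type hints and assuming a static checker like mypy (the names are made up):

  from typing import NewType

  # Costs and balances are integer cents; the distinct type keeps callers
  # from passing a float (or a plain, unvetted int) by accident.
  Cents = NewType("Cents", int)

  def apply_charge(balance: Cents, cost: Cents) -> Cents:
      return Cents(balance - cost)

  ok = apply_charge(Cents(10_000), Cents(499))
  # apply_charge(Cents(10_000), 4.99)  # rejected by the type checker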


Recently our team built a data pipeline: a few large inputs, a few large outputs, a lot of processing in between, a lot of parallelization & working w/large datasets needed. Essentially you could view the entire process as writing one very complex function.

We approached this by first outlining the procedure and specifying the types involved, then outlining functions from each type to the next. You could essentially think of our types as tables, so our outline was f1: t1 -> t2, f2: t2 -> t3, ..., fn: tn -> t_output (although not quite that linear).

This let us split up work on the different functions across a team of 6 people. Specifying the input and output types was basically enough to make sure the functions were correct most of the time, and baking the interfaces into types enforced by the compiler made it easy to refactor & coordinate on changes when necessary. Feedback when we made an error was generally immediately available, because IntelliJ would highlight the function that produced an output value of the wrong type, or the compiler would catch it.

In contrast, if we had relied primarily on unit tests to check the functions, that would have made coordination more difficult, refactoring harder, and would have required us to either generate or acquire test data to feed through each function. But this architecture let us successfully build out most of the logic even while we had no access to real data & a different team was working on data ingestion.


This is interesting, do you mind being more specific - what was the data, how big was it, how long did the functions run?

Assuming you are talking about real types and having something like

  f1 :: t1 -> t2
and

  f2 :: t2 -> t3
you suggest you were able to do

  g :: t1 -> t3
  g = f2 . f1
which works perfectly well, but is sometimes nontrivial to do for more complex functions, in particular if they are not pure (e.g. they do IO as data is too big for memory) and you do some logging and house-keeping in-between and because of runtime behavior that might be hard to predict.

Does f1 consume all input before f2 can run? Is it "streamed-through", like in `sh`, e.g.

  $ find /home/foobar | grep hs$ | xargs wc -l
which is often done as an optimization?

I really like the concept and it works great, but for me it is simpler to apply to smaller constructs and I am still investigating how to apply it to more "business-logic".


Almost all the functions are pure–the only impure functions we use read from or write to a data store. We use Apache Spark, which lets you write pure functions that can operate on data too large to handle on a single box, and it overall works quite well. E.g. when we started designing this project we wrote something like:

  g = f5 . f4 . f3 . f2 . f1

where f1 reads from s3 and f5 writes to a db. Then the implementation work mostly involved breaking these down further, e.g. f1 = h4 . h3 . h2 . h1, where only h1 is stateful and everything else is pure.

Spark is lazily evaluated, and in practice it will stream through many operations–f1 will generally not be done consuming the input by the time f2 starts, although sometimes we force it to for debugging purposes.

Lazy evaluation and the discarding of side effects make logging difficult, which is one of the downsides. There are various monitoring and debugging tools that help but it's still definitely harder than the single machine case.
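For a sense of the shape (a condensed PySpark sketch; the paths, dataset and column names are invented):

  from pyspark.sql import SparkSession

  spark = SparkSession.builder.appName("pipeline-sketch").getOrCreate()

  # f1: the stateful read at the edge.
  raw = spark.read.json("s3://example-bucket/events/")

  # f2..f4: pure transformations; nothing runs yet thanks to lazy evaluation.
  cleaned  = raw.dropna(subset=["user_id"])
  enriched = cleaned.withColumn("is_large", cleaned["amount"] > 100)
  summary  = enriched.groupBy("user_id").count()

  # f5: the write is the action that forces the whole chain to execute.
  summary.write.mode("overwrite").parquet("s3://example-bucket/summaries/")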


One thing that bothers me is the 'relational databases are good enough' statement, that is repeated in other contexts as well.

But especially here, where we're talking about reducing complexity, it feels off to me. PostgreSQL and MySQL seem to me like incredibly complex packages. SQL, the language, is not easy to master either; most programmers I meet know mostly basics. On top of that, there's a long ongoing history of security malpractice.

When talking about reducing complexity, CouchDB and Redis are far easier alternatives, in my humble opinion, though they go slightly against 'use the tools developers know'.


The implementation of PostgreSQL is complex, no doubt about that. But if you need strong data consistency and durability guarantees, it provides a rock-solid foundation.

SQL might take some getting used to, but it is also not rocket science. It shouldn't take more than a week's study to master the basics. There is of course a lot of awful SQL code out there, exactly because most programmers don't even know the basics. You can do incredibly powerful things in it that would take 10x the code in an OO/procedural language. In my opinion dumping an ORM on top is also not the best way to leverage the strengths of an RDBMS.

It is slightly ironic that you bring up security malpractice in the context of PostgreSQL, when in the next sentence you advocate Redis as a far easier alternative. As was recently in the news, the Redis defaults were for a long time insecure (google for Fairware ransomware).


> In my opinion dumping an ORM on top is also not the best way to leverage the strengths of an RDBM.

I agree. Unfortunately, the way I see people usually using them is pretty bad - you should not let ORM-generated stuff dictate your business model. Database is a database. A storage layer. Business objects will not map 1:1 to ORM objects. Approaches like "let's inherit from ORM class and add business-related methods", in my experience, lead to total disaster. One has to respect the boundary between storage layer and business model layer.


I'm aware of the different approaches (mostly from reading Fowler's PoEAA), but currently we use an Active Record-style ORM with a few extra features (like Class Table Inheritance) and we haven't found any major issues with this approach. What was the worst case scenario you experienced with the 1:1 approach?


Over the past 5 years I've been on two projects using the 1:1 ORM Active Record == business model base approach. One completely failed in part because of this; the second is barely manageable, but I managed to save it by moving business code mostly outside of the Active Record classes.

The problem I encountered in those projects is the mismatch between storage mental model and business mental model, which lead to explosion of crappy code (AKA technical debt). In particular:

1. the classes I need for business model may have initially mapped well to database tables, but over time they stop; business logic and model changes much faster than you'd like your DB schema to

2. since many things in AR can fire SQL queries, you have to keep in mind the workings of your database when doing almost every operation on your model; it's an abstraction leak

3. code shooting off SQL queries is randomly called from all over your codebase; it's harder to keep track of it and, if needed, optimize those queries

I like AR as a convenient API to get data from/to database, but given the point 1., I eventually learned to isolate AR layer as something below business model layer, so that the pattern is that business model is explicitly serialized and deserialized from database, instead of the database being coupled with the logic of your program.

Now I vaguely recall complaining about this before on HN and getting my ass handed back to me by someone who pointed out that these are all ORM n00b mistakes. I wish I could find that comment (pretty sure I noted the link down somewhere). Yeah, I admit - in those two projects I mentioned, we were all ORM noobs. So we've learned those lessons the hard way.


> Approaches like "let's inherit from ORM class and add business-related methods", in my experience, lead to total disaster.

I don't disagree, in fact I'd go further and say that data and logic should not be coupled, but this is the Active Record pattern, which is far from the only way to use an ORM; most ORMs won't even support this pattern by default.


Moreover this is a terrible pattern. I would never use an ORM like that. An ORM implemented using the DataMapper pattern is so much better.


The best ORM ever: F#'s FSharp.Data.SqlClient. A very thin layer that lets you statically program in SQL in your app. But I typically just use Functions/Stored Procs. But sometimes, for one-off things and experimentation, it can be nice to write SQL directly in your app.


Relational databases are not simple systems as you say, but they do seem to me simpler to use - especially in the 95% case where a single, large enough machine hosting postgresql/mysql is entirely sufficient.

Key-value stores are "easy", but what I think isn't easy is to reduce your business domain to a simple key-value model without sacrificing the promises and guarantees offered by a good relational database system.


>But especially here, where we're talking about reducing complexity, it feels off to me. PostgreSQL and MySQL seem to me like incredibly complex packages. SQL, the language, is not easy to master either; most programmers I meet know mostly basics. On top of that, there's a long ongoing history of security malpractice.

PostgreSQL and MySQL are very complex, but the complexity is entirely contained. Both are well-tested and reliable, so developers can deploy them without worrying much about them.

I would disagree about SQL being difficult to master. The basics are all most people need, and are not at all difficult to learn. The more advanced stuff (e.g. CROSS APPLY) is not necessarily standard across implementations, and can usually be replaced with application code.

>When talking about reducing complexity, CouchDB and Redis are far easier alternatives, in my humble opinion, though they go slightly against 'use the tools developers know'.

I can't say I'm that familiar with CouchDB, but Redis is entirely inappropriate for most SQL use cases. It's a key value store, and is not meant to do any sort of advanced queries.


Minor aside; I don't think the complex bits of sql are things like cross apply - that's almost no different from a join, especially if you're from a non-sql background where "joins" are typically statements+loop-equivalents and typically hierarchical and ordered. If people have difficulty with cross apply, they're just not trying.

Of course, if you regard sql as something you'd rather not "waste" time on, of course you're going to find those kind of subtle distinctions confusing - sort of like how people think css is difficult.

The more reasonably "complex" bits are the update visibility semantics, i.e. which transaction isolation levels mean what in various scenarios.

That's really complex, and it's truly somewhat unique to sql in that most alternatives simply don't bother trying to solve those problems at all - that can be a bad thing, but it is simpler.


Postgres and MySQL aren't overly complicated for simple use cases.

A lot of other technologies drop complications like foreign keys which you won't miss when you start developing software but it gives guarantees you will miss sorely when you start seeing inconsistent data 6 months in.


The expensive and complicated thing about these is deployment and maintenance. But then, you could instead pick SQLite and switch to a big one when needed.

Sometimes it is good to get a pickup truck ahead of time, but often a smaller less versatile car will suffice. But not quite a motorbike.


Databases are often the only stateful component in a system - statefulness is inherently complex.


Go with SQLite if you can get away with it. It's a library, not an external engine, and databases are stored inside normal files, which makes a lot of things easier if you're building a standalone app (as opposed to server-side software).

SQL is well worth its time to learn. It's a good DSL for relational data. Most programming languages used for regular code are not very convenient with relational data. As for its security issues, this is actually simple - one has to respect SQL as a real programming language with its own syntax and grammar, instead of resorting to idiocies like gluing strings together in an ad-hoc manner.
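Concretely, that mostly means letting the driver bind parameters rather than concatenating strings (a small sqlite3 sketch; the table and data are invented):

  import sqlite3

  conn = sqlite3.connect(":memory:")
  conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
  conn.execute("INSERT INTO users (name) VALUES (?)", ("O'Brien",))

  evil = "x' OR '1'='1"

  # Gluing strings together (don't): quoting breaks and injection gets in.
  #   conn.execute("SELECT id, name FROM users WHERE name = '" + evil + "'")

  # Binding parameters: the input is always treated as data, never as SQL.
  rows = conn.execute(
      "SELECT id, name FROM users WHERE name = ?", (evil,)).fetchall()
  print(rows)  # [] - no match, and no injection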


I would say SQL is easier to master than most functional or procedural languages. The true issue, in my experience, is that it is different enough most developers don't want to take the time to learn it beyond the basics, much to their detriment.


I think you're forgetting about practicality and not reinventing the wheel.

I'm going to use a car analogy here. Modern cars are incredibly complicated machines. They're also generally very reliable, thanks to about a century of development and engineering, and (relatively) inexpensive thanks to economies of scale.

If I want to transport myself to work on weekdays, and transport my girlfriend and maybe another friend on a weekend trip, and carry a bunch of groceries once a week, I can do all of that with a standard 5-person car. It isn't completely optimal for any of those tasks: it has more space than it needs for any of them, especially the weekday commuting. For my daily commute, I don't even reach highway speeds because I live close to work, so the engine is seriously overkill.

So I could have a custom-designed vehicle for each of these use-cases. Each vehicle would be a little less complex than my current car. But that's a lot to maintain, and would surely be far more expensive and less reliable, since each one is a one-off, requiring custom design and engineering, special parts, etc., and not benefiting from the economies of scale and engineering resources that a mass-market car gets. So instead, I just go buy a ready-made car and use it, and it works great and I'm happy.

Is PostgreSQL overkill for a lot of uses? Probably so. But it's designed to be used for all kinds of different tasks, and while it may not be quite as efficient for any of those tasks as some custom-designed solution, it's far more flexible, and it's readily available, plus it's benefitted from an enormous amount of engineering and debugging that a custom-designed solution would not. Things like CouchDB don't have nearly the number of users and amount of development, so while they may make sense for some tasks, the fact that PostgreSQL has more lines of code does not necessarily mean it's less reliable, in fact the opposite is likely true, just like an off-the-lot Honda or Toyota is likely much more reliable than some custom-designed car that someone built in their garage or some high-end limited-production exotic car like a Ferrari or Bentley.

The reason for using an off-the-shelf solution is because it's fast and easy and reliable. It doesn't matter if you're not making use of 80% of the features or capabilities. And software isn't like cars or engines; hard drive space is nearly free, and except for certain applications you're not likely to see a significant downside to just using a standard SQL database versus something more tailor-made. The main problem is cost (like with Oracle), but with PostgreSQL or MySQL this isn't an issue since they're free (and Free). It also helps that they use a standardized query language which makes them much more accessible.


I've found that as soon as you change your data model and now need to deal with old data, document stores get just as complicated but require more custom solutions.


> TDD is good for fast feedback in some domains, ...

It sounds like "IDEs are good for fast feedback in some domains." If someone says that an IDE is only suitable for GUI applications but not for the software he is writing, he probably means "look at me, I am the old-school tough guy." It's about people disliking the tool, not about the domain.


Having loosely coupled microservices has benefits for maintenance as well as scalability. If you are quickly iterating on a product and have it up in a rough state then you can easily work on separate parts without having to worry about affecting the whole.


It can go both ways. If you mess up modification of the microservice such that it breaks other services that relied upon it, you quickly get into ceremony that a small team might struggle with. A monolith might have had the problem solved faster.


You're right of course, but it can also go the other way. If you break something then it seldom brings the service down, and often the breakage is very visible and can be seen by queues backing up. You can restore the broken service without having to redeploy the system as a whole.

Like most things in life there is no right answer, with tools like Terraform you can build a very complicated microservices system with not much effort, but only if someone on your team is experienced enough. If you're in a small team it's probably not worth the effort of learning the techniques and putting them in practice. We hate premature optimisation after all.


Quite possibly the first time I have agreed with everything in a post on HN! Just because a new paradigm exists doesn't mean it will solve all of your problems and will likely introduce several more!


In short, ask the question "when is practice X useful?" instead of "is practice X a good idea?"

Shorter version: Cost/benefit.


Continuous integration is a good thing. Back in the bad old days you'd have three people working on parts of the system for 6 months and plan to snap them together in 2 weeks and it would take more like another 6 months.

Agile methods are also useful. If you can't plan 2 weeks of work you can probably not plan 6 months.

When agile methods harden into branded processes and where there is no consensus on the ground rules by the team it gets painful. The underlying problem is often a lack of trust and respect. In an agile situation people will stick to rigid rules (never extend the sprint, we do all our planning in 4 hours, etc.) because they feel they'll lose what little control they have otherwise. In a non-agile situation people can often avoid each other for months and have the situation go south suddenly. In agile you wind up with lots of painful meetings instead.

Also I think it is rare for one language to really be "best for a job". If you want to write the back end of a run of the mill webapp, you can do a great job of that in any mainstream language you are comfortable in.


> Agile methods are also useful. If you can't plan 2 weeks of work you can probably not plan 6 months.

Hmmm. I was just thinking the opposite yesterday. I'm a performance engineer working closely with two teams. One doing Agile and the other relying on wikis and ad-hoc in-person whiteboard discussions. I find the non-agile team more productive, efficient and dare I say happy. The Agile-based team makes me sit in on their daily scrum meetings. Although everyone uses it to sync up on their dependencies, it just drags for an hour almost every day. I can visibly tell the devs walking out of the room spend more time worrying about "velocity" and "organisation of work" than the money making work that needs to be done. It almost feels like the agile process gives them "one more job" of picking the doable things from the list of stuff that needs to be done so they look better than their peers with better velocity.

Simply put, I was wondering whether Agile is just not a good method when you can instead strive for good leadership and healthy collaboration among the individuals of the team.


> I can visibly tell the devs walking out of the room spend more time worrying about "velocity" and "organisation of work" than the money making work that needs to be done. It almost feels like the agile process gives them "one more job" of picking the doable things from the list of stuff that needs to be done so they look better than their peers with better velocity.

Classic symptom of managers using agile as a (micro)management tool. Velocity, burndown charts, etc. are meant to be used by the team as a self-calibration tool. Managers do not get a say in what they think the velocity should be, either for the team or for individuals. If they do so, they create an incentive (let's be blunt, an overwhelming incentive) for the team/individuals to game them and that way lies madness.

(As an aside, the best response I've ever seen to this type of dysfunction is a team who simply decided to retcon the charts on the fly to make the work committed to match the work done. Management was happy that the burndown chart was right on target, developers were free to be fully productive instead of worrying about what their velocity looked like; it was a win-win solution all around.)


I've seen Jira tickets about creating Jira tickets :)

In large lines I agree with this comment. Micromanagement of Agile teams is detrimental. Implicitly, the message is that managers should leave their teams to work in peace?

The question I have: assume you are that manager. You have 5 agile teams working on 5 client projects. One team seems to get work done much slower than the other teams. What do you do? (And how does one actually track progress of an agile team to begin with? Story points can vary wildly across teams).


> And how does one actually track progress of an agile team to begin with?

You've hit upon the key question: you want a progress metric that's in line with productivity. IMO, assessing that comes down to evaluating whether the functionality/code delivered (at a high level) per sprint is reasonable, evaluating whether the task breakdown the team is operating against is sensible, and whether tasks are being accomplished in a reasonable amount of time relative to their difficulty, the skills of the person doing them, etc. In other words, the evaluation needs to be specific and include the circumstances: Saying "Why did implementing XYZ take longer than one would normally expect, even taking into account ABC?" is going to result in fixing the real issue whereas saying "Why is your velocity number so low?" is going to result in "fixing" the number.

That, in turn, requires the manager to either possess solid software engineering skills or have access to someone possessing those skills who can make the assessment in their place. And, yes, it's a lot more work. But, as has been amply documented, attempting to manage off a single number (a self-reported number, no less), simply doesn't work.


> I've seen Jira tickets about creating Jira tickets :)

So long as it's one:many.


Or one-to-one where the first is a spike that will take time to determine what should go into the second (discovery, learning, experiment, further estimation). You would not want to commit to a unit of work without some idea of what it entails in Agile. The alternative is Programming Mother Fucker, where you just dive in and see where it takes you. The business side usually prefers more predictability. The developers usually prefer Just Getting It Done (tm).


I was once on a project where an "agile" team, at the behest of their managers, held a sprint "to improve velocity." I kid you not.

I will add that I was not on that team. Our scrum master, who was excellent, shielded us entirely from the management madness.


What was the idea behind this? Spending some dedicated time knocking through some accumulated crud? (I think people call it 'technical debt' these days). Is that a bad thing?


Nope. That I could have respected and would have made some sense. It was code for "the team will work longer hours and over the weekend so that an arbitrary number is higher."


Hmmm. Weird. That sounds like people trying to engage in "growth hacking". I wonder if someone's bonus was tied to it.

If they were really serious about "velocity" (for whatever reason; some are legit), they'd divide by man-hours, not weeks, anyway, and have actuals going back 3+ months (6+ is better) to baseline their sitrep before they started knob-twiddling.


This was just "work long hours to get the project back on track" being communicated as "improve velocity" with about as much success as you'd expect and less understanding than you're projecting.


It doesn't sound like the 'Agile' team is doing standups right.

There are either too many people in the room, or people are talking too much, and probably about the wrong things.

And, if they are doing standups wrong, I question how much else they are cargo-culting.


Generally, claiming that a person is doing the method wrong when the method brings no perceived benefits raises a red flag about the general applicability and value of the methodology itself for the problem the person is trying to solve.

There is no one true way to organize development of software and generally shoehorning dogma without proof of value is counterproductive.


I think you can absolutely diagnose an hour-long daily standup as "wrong". Standups need to be limited to 10-15 minutes. That's one of the key tenets of standups: they need to be short and focused.


Yes, regardless of the process framework used, having the team meet daily for an hour is pathological and probably an indicator of deeper issues, which just doing agile "more by the book" won't fix.


Can you really stand for hours at the standup meeting? :-)

The daily standup meeting must be short and focused. Moreover, the standup meeting is for developers only. No project manager, no customer, no QA team. Just developers talking about their problems. Everything else deserves its own separate, hour-long meeting once a week or two.


Just want to stress the importance of having stand-ups be developer-only. Do not let project managers, product owners, stakeholders, clients, and so forth become part of it. Otherwise the developers won't be ready to start working when they take their seats, because a lot will have been said in standup without much actually being said.


Eh, it's valid in this case. The ten-minute standup is formulated in opposition to a full meeting. The full meeting was there first; standups are supposed to be different. If it's turning into the full meeting it's worth calling that out.


I'm sorry, but no, it doesn't. Sometimes people just do things the wrong way, and there's nothing that can be done other than to tell them they're doing things wrong.


> Generally claiming that a person is doing the method wrong when the method brings no perceived benefits raises a red flag on the general applicability and value of the methodology itself

Agree completely. When incredibly common complaints about a methodology are raised and the response is "you're doing the method wrong" you start to err towards dogma and a "No true Scotsman" approach to management.

Sticking to any technique, including agile, no matter what and with no modification is a symptom of a problem. Projects are unique, there's no one size fits all way to manage them.


Yes, everyone in there knows that scrum meetings should be short, but somehow those meetings run long, because everyone thinks their questions need answers and their dependencies definitely must be resolved... so, an hour it goes.


The point of a standup isn't to get answers or resolve dependencies - it's to make others aware of them.

The first thing we do after stand ups is have a bunch of quick one-on-one or small group meetings.


Our meeting format was for each person to say a) What they worked on, b) What they were about to work on, and c) If they needed help with anything. Our rule was that you could ask for help as long as you were just scheduling a time to meet after the stand-up. It worked really, really well.

Then we got a new scrum master whose desired format was to talk about each item on the Kanban board each day, even the ones that weren't being worked on.


It's a status meeting. Developers dislike status meetings, so they invented their own meeting: the daily standup.


This is really bad and a major failure on the part of the scrum master. The point of this meeting is to help the team sync on what they are doing and what they plan to do.

It should take 10m at most. Any issues (blockers, dependencies, etc) should be taken to separate ad-hoc discussions to let the team get back to work.


If there is no room for syncing up and every issue is to be taken offline, one should question whether it makes sense for the standups to be synchronous at all. In my team we do standups asynchronously over Slack; it's a great and non-obtrusive way of updating each other and we achieve the same thing.


It's not the same.

The standup performs an important social function of making everyone involved in the work of the team. People have to face each other, feel accountable to, and feel supported by the group.

And of course there's no room for synching up: The morning standup is meant to facilitate those synch-up meetings, synch-ups that don't have to involve everyone.


>>> The standup performs an important social function of making everyone involved in the work of the team. People have to face each other, feel accountable to, and feel supported by the group.

This is a fantastic description. And it goes a long way towards explaining why standups (even when I'm not directly involved) leave me feeling so uncomfortable.


Kick the project manager out of the standup meeting and you will love them.


If that makes your standups better, you need a competent project manager.


You are right, the problem is in the project manager's head: they abuse the standup for status updates because they don't understand the purpose of the standup meeting, or because of a lack of discipline (developers may provide infrequent updates in git/jira). It's hard to debug a development process by email. However, if developers do their standups properly, then their development process will fix itself. (Or the project manager will fire the most active one. ;-) )


Assuming this is scrum or something similar then if "[the standup] just drags for an hour almost every day" then they're not really doing it right.

I've been in well run Agile teams - and they're wonderful. I've been in badly run "Agile" teams and they're soul destroying. Either way agile is not the problem (or, I dare say, the solution).


One thing I've observed in (badly-run, I think) Agile teams is big standup meetings, where if anyone starts a discussion or even asks a question (rather than just reporting status) somebody immediately says "offline!" -- i.e., have that discussion after the meeting.

I can see that the motivation is to avoid wasting the whole team's time on a discussion that only needs two or three people; but suppressing discussion can hurt too, as it stops people learning about tricky issues outside of their immediate work area.

It would be helpful to have some rules of thumb to show when you're doing Agile wrong. Probably those exist already -- anyone got a good link? And probably "too many people in the standup meeting" is a good rule of thumb!

Dragging on for an hour sounds absolutely awful. I'd even say more than about six people is too many.


The point of a standup is to learn that there is a tricky issue outside of your immediate work area, and to know who's got expertise on it. That way you know who to contact if "outside" becomes "inside". The actual details, you hash out in a separate meeting with just the people involved. "Offline!" is absolutely the right response if a standup starts veering into technical details.

I had one team of 10 that had a problem with our standups extending into half an hour once. We resolved that we'd make the standup one minute shorter each day. After a month and a half, we had it down to a one-minute standup (6 seconds per person). It was still useful, though a bit extreme - I'd target about 5 minutes for a 10-person standup (30 seconds per).


I've been in a situation where we did agile with 30 people. Standup took 10-15 minutes.


Did it work well?


Extremely. You'd take ten seconds to say what you had to say. Sometimes you'd say "I need help with SQL", say, and someone would say "I'll help", and you'd be done.

But we had a real agile guru on the team. We didn't do "Agile Methodology" exactly; we did Extreme Programming, and we kept tweaking it. Sometimes he'd say "let's try changing our approach in this way for the next two iterations, and see how it works out". We'd do the experiment, and keep the changes or not. We kept hacking and experimenting with the process, in a controlled way, but never in a "this is how it's done" way.

So if you have an "Agile is the one right way" person trying to run your team with big-A Agile, and he/she wants to do 30-person standups, you're probably in trouble...


Sounds great! Keeping track of how well the process itself is working, especially, and being willing to continually tweak it to fit the team and project.


One doing Agile and the other relying on wikis and ad-hoc in-person whiteboard discussions.

The ad-hoc approach also sounds quite agile (at least with a small 'a'). It's certainly closer to Agile than to Waterfall, assuming they didn't do a big design up front before writing any code.

I think the ad-hoc agile approach can work very well with a good team. But Scrum fans always seem to warn against cherry-picking just the bits of Scrum you like and not using the whole process.


> But Scrum fans always seem to warn against cherry-picking just the bits of Scrum you like and not using the whole process.

But of course. If you just cherry-pick and experiment, then you won't have any reason to pay an expensive expert to tell you how to do it right!


> I think the ad-hoc agile approach can work very well with a good team. But Scrum fans always seem to warn against cherry-picking just the bits of Scrum you like and not using the whole process.

I'm a big Scrum fan (when it works), and my biggest takeaway is that it's exactly meant for cherry-picking and modifying. The best team I've ever been on was one where we were all using Scrum for the first time. We were constantly trying to mold it to fit us best, and it ended up looking nothing like the original model of Scrum. It was also the only time I've ever been on a Scrum team that did proper retrospectives, which I think is the biggest point!

Pretty much every other team has either ignored it ("Why do we need to discuss Scrum, it's in the book and laid out for us.") or merged it with the Review, so that managers, stakeholders, and people outside the team are involved in that. And no one wants to suggest changes or raise complaints with outsiders watching.

Too many people seem to read a book about Scrum, memorize all the concepts and rules and abide by it, without reading any of the justification behind it. If you swear we need story points, and they need to follow a fibonacci scale, but you can't tell me why story points are better than estimating hours, you're doing it wrong (and then points always get fucking conflated with hours anyways). If you understand that story points are just one way of estimating a task's effort relative to other tasks, and that relative estimates tend to be easier to make, and scale better with all the other estimates when things change, then you're allowed to make the call of whether story points are best for the team, or a different estimating system, or none at all. Even better than someone understanding that, everyone on the team should understand that and be able to weigh in.


Depends what you cherry-pick. One of the key assumptions of Scrum is colocated teams and (implicitly) engineers who want to understand and think creatively about the domain problems. Without those, you have my personal guarantee you will fail.


I'm not a huge fan of Scrum, but there's a grain of truth in there. If you're forced to use the whole thing, it's harder to creatively misinterpret the underlying spirit by e.g. having one hour "standups" where everyone is sitting down, having a backlog that covers a year of work in excruciating detail, or estimating in hours then using those estimates to fire people.


Is everything else between these two groups completely equal? I seriously doubt it is, in which case I don't think it's fair to make any conclusions that hold weight.

This is one of the problems I have with these sorts of things. My company went Agile about two years ago, and lots of people like to rant about how much better everything is now and how much more productive we all are because of it. Except we actually have no way of knowing whether it made any difference at all.


Sorry, I should have made it clearer. I ranted more as a personal thought than a definitive statement. The teams work on different projects. The diversity and experience of their members are different. They are not strictly comparable.

But, looking at both teams from above, it feels like the non-agile team is very simple and it works. The agile team is more complicated and works only on paper.


From my personal experience: experienced teams can thrive with almost no methodology and an ad-hoc process because... They had experience with other processes and can see the good and bad in them.

I still advocate agile for less homogeneous teams, or in situations like other posts have highlighted, but a team of more senior developers with a working process that is open to improvement (one of the cornerstones of agile) will thrive with less churn than when forced into a by-the-book agile process.


For me Agile is by definition an ad-hoc process just one with guiding principles for how to go about organising it. The problem comes with formalised methodologies based on Agile which are treated as a one size fits all approach for any team.


1 hour daily meeting sounds horrible (and dysfunctional) whatever the development life cycle looks like.


Agile is a very loaded word. One meaning of Agile is a very specific kind of process, the other meaning and perhaps closer to the original manifesto is what you're describing with the "non-agile" team.


Nowhere does Agile say you have to do standups or measure velocity :) At some point the team that's inventing its own process will find it stops working, what's important is if they can identify when that happens and find new ways of getting things done.

http://agilemanifesto.org/


Scrums and stand-up meetings are mostly a waste of time. Scheduling frequent milestones is not.


A "1 hour daily standup" is not agile. The point of a stand-up is just that... everyone can stand because the meeting is so short. Ideally 15 minutes max.


It's annoying when people get really dogmatic about having to stand up in the stand-up meeting. I know it's supposed to remind and encourage everyone to keep the meeting short, but in my experience that simply doesn't work.


Agreed, enforcing standing up but not brevity is the worst way to do standups, and a clear sign of pure cargo-culting.


Maybe not. But have a culture that the meeting really is that short, and let people sit if they want to.


Agreed. Don't force people to stand, keep it so short and sweet so that people want to stand.


The reason it is referred to as a stand up is because it is short and you stand up for the whole thing. An hour long meeting is just that an hour long meeting. Something is not working right in that agile situation which is why they aren't happy.


Scrum != Agile. Heck, it sounds like you're doing Scrum wrong anyhow.

A Kanban board, prioritization, CI + CD, and automated tests are probably about as much agile as most companies need.


You are not doing agile.

In a daily scrum you cannot have conversations; it's just everyone stating 3 things: what I was working on yesterday, what I will do today, and whether I need help today from someone. For a team of 10 people (a large agile team!) it should not last more than 15 mins.

I guess other parts are broken too, if they don't even know how to do a standup.


Just because someone says they're using Agile doesn't actually mean it. Your non-agile team sounds much closer to the actual goals of "Agile".


Seems like people think using Agile and using your brain is an either-or kind of thing. It's not magic.


Agile is not a method. It's just a buzzword.


> Continuous integration is a good thing. Back in the bad old days you'd have three people working on parts of the system for 6 months and plan to snap them together in 2 weeks and it would take more like another 6 months.

Also extremely important is that it brings you:

* Working tests. If you make changes and forget to or don't run all tests, the CI server will catch it and make you aware. You still have to write (useful) tests, of course, but that's a discrete problem.

* Entirely kills the excuse "Well, it builds on my machine". This means no undocumented dependencies, and the entire build is scripted.

* "Release builds" are a non-event. They just happen all the time, and your "release" is just the latest build that includes the changes you want and passes testing. This removes situations where there's only a limited set of people (eg: one) that can do a full release build.

Doing CI early on is much simpler. Aside from being beneficial from day 1, it is much easier to incrementally add to your build script/environment as needed than to try to create a script later based on a complicated manual process.

Not having CI is not nearly as bad as not using source control, but it's in the same ballpark.
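
To make the "entirely scripted build" point concrete, here's a minimal sketch of the kind of script a CI server could run on every push. All names and the project layout are invented; the point is only that the whole build lives in a script instead of in someone's head:

    #!/usr/bin/env python3
    """Hypothetical CI entry point: the whole build is scripted, so there are
    no undocumented dependencies and no "works on my machine" excuses."""
    import subprocess
    import sys

    STEPS = [
        ["pip", "install", "-r", "requirements.txt"],   # dependencies are declared, not tribal knowledge
        ["python", "-m", "pytest", "tests/"],           # tests run even when a developer forgets to
        ["python", "setup.py", "sdist"],                # every green build yields a releasable artifact
    ]

    for step in STEPS:
        print("+ " + " ".join(step))
        if subprocess.call(step) != 0:
            sys.exit(1)   # fail loudly so the CI server marks the build red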


"Agile methods" as a term has zero meaning at this point. Talk about the specific things you do to make things work well instead of lumping them under the heading "agile".

The issue of people refusing to coordinate and working independently on two useless things for months at a time is an issue that transcends any popular software terminology. The role of a good manager is to see that units A and B are not in sync and resolve that. That's true in all disciplines. You can't attribute this to "agile" just because you meet every 2 weeks.


I think it's the nature of how a company implements agile, rather than agile itself. I have a client that subcontracts a lot of work to me (my work is mostly gathering business requirements and BPI).

They use "agile" but I have budget and timeline constraints on every project.


I find agile works and works well when you limit the definition to "do what you can to avoid waterfall".

Where it starts to be about how you conduct meetings then it tends to fall to pieces.


No, it's not just you and yes, we often do overcomplicate software development.

It's been that way long before agile methodology or microservices though. Complexity-for-the-sake-of-complexity EverythingHasToBeAnAbstractClass frameworks have been plaguing the software development business since at least the 1990s and I'm sure there are similar stories from the 80s and 70s.

It's hard to find a one-size-fits-all easy method for not falling into that over-engineering / over-management trap. I try to focus on simple principles to identify needless complexity:

- There is no silver bullet (see "microservices"): If the same design pattern is used to solve each and every problem there probably is something amiss.

- Less code is better.

- Favour disposable code over reusable code: Avoid the trap of premature optimisation, both in terms of performance and in terms of software architecture. Also known as "You aren't gonna need it".

- Code means communication: By writing code you’re entering a conversation with other developers, including your future self. If code isn't easily comprehensible again there's likely something wrong.


I think the tendency to over-engineer is a symptom of retrofitting an assembly-line 9-5 shift onto the creative process of writing code.

You sit a guy there 5 days a week for many years. He has to look busy, he has to do something with all of that time. He's not going to get paid if he writes the code in the most simple, concise, and straightforward way possible and then goes home until they're ready to make a new feature two weeks later. He has to sit around and make up something for himself to do.

Contrast with side projects. I have many simple weekend projects that continue to work well and provide their promised utility years later. Because you just write what you need and stop, you don't get sucked into the disastrous complexity spiral that every company-internal software project ends up as.

The other factor here is that people need some signal to say "I'm good at my job" (because no one can actually tell). That signal has to go to colleagues, superiors, and peers outside the workplace. People therefore invent artificial complexity or take intentionally convoluted approaches so they can sound fancy. In the most extreme cases, this is a conscious decision designed to block out "competitors" (colleagues). In many cases, it's a subconscious way to ego-stroke (and to mix in a little bit of variety per point one above).

This is especially true when a household brand like Google or Facebook pushes out some new esoteric thing; everyone wants to see themselves as a Google-or-Facebook-in-waiting and it makes it easy to pitch these things to the bosses, when the fact is that the kinds of things that work at large public companies like Google are probably not going to work in small companies.


Thank you for shining a light on the psychological side of this discussion. I like to highlight psychology when I have these discussions with peers because too often technical folks view the world through technology lenses instead of human ones.


Very accurate and poignant.


> Less code is better.

I'd change this to "Write as little code as necessary, but no less." The problem with "Less code is better" is that some folks use that as justification to write clever one-liners that are difficult for other developers to read. That is not better.

That aside, I agree with everything else you said!


> some folks use that as justification to write clever one-liners that are difficult for other developers to read.

Seriously - Please, if you do this, stop it! You're just slowing everyone else down.


There's also a tendency to "architect" things so that common things can be reduced to one liners. A common one is to create form generators via attributes. Things always end up being way more complicated this way.


> Favour disposable code over reusable code: Avoid the trap of premature optimisation, both in terms of performance and in terms of software architecture.

Some people call it "premature generalization". Relevant C2 page: http://wiki.c2.com/?PrematureGeneralization


The day I learned about premature generalization was the day my speed as a developer jumped by a factor of three. When you're generalizing, it's easy to overlook how much time that takes.

My rule these days is that, in most cases, I shouldn't generalize before the same code exists in three places. Why not two? In my experience, things that are done twice aren't necessarily done three times. But things that are done three times are likely to be done a fourth.
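
A throwaway illustration of the rule (the domain is made up): the first two near-duplicates stay as they are; only when the third shows up do I extract the shared piece.

    # Two reports that happen to compute the same thing -- I leave them alone.
    def weekly_report(rows):
        return sum(r["amount"] for r in rows if r["status"] == "paid")

    def monthly_report(rows):
        return sum(r["amount"] for r in rows if r["status"] == "paid")

    # The third occurrence is the trigger: now I pull out the common piece and
    # the reports become thin wrappers around it.
    def total_paid(rows):
        return sum(r["amount"] for r in rows if r["status"] == "paid")

    def quarterly_report(rows):
        return total_paid(rows)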


Also, when you've got something that's the same in two places it's hard to tell if it's really the same concept or just a "rhyme" that might later evolve into two distinct things.


> Favour disposable code over reusable code

I prefer this way of putting it over YAGNI. This makes it sound more like the trade-off it is.


It's interesting to work alone on a big-ish project with no one telling you what to do and not having to explain anything to anyone. It easily feels ten-times more productive (in terms of accomplishment), but then again it won't have a business case and one doesn't get paid, either.

I think I'm least productive in open source (again, in terms of felt accomplishment), because if one isn't the sole maintainer (like above), then it's a pretty safe bet that few changes take less than a certain baseline (e.g. 1 hour) -- someone always has a nitpick, CI always takes its sweet time, oh, did we discuss yet in which branches we wanna merge this? Ah, please avoid puns in documentation and comments. Do we want this? Can you write this differently, like ...? Did you manually test this or that scenario ...?

(Now this also has advantages in terms of stability, quality and consistency -- but it's also obviously far, far less efficient)

On the clock it's more like "Meh, change that and that, otherwise it's good, so merge it after these changes and tell ops to put it in prod"


100% this. Applying Occam's razor to software engineering, the simplest solution is most often the best one.

It always amazes me, the instinct by engineers to overcomplicate things. They shoot themselves in the foot and curse when the inevitable subtle bugs start rolling in.


One of my fav tech talks ever (and I watch a lot of tech talks) is Alan Kay's "Is it really 'complex'? Or, did we just make it 'complicated'?" It addresses your question directly, but at a very, very high level.

https://m.youtube.com/watch?v=ubaX1Smg6pY

Note that the laptop he is presenting on is not running Linux/Windows/OSX and that the presentation software he is using is not OOo/PowerPoint/Keynote. Instead, it is a custom productivity suite called "Frank" developed entirely by his team, running on a custom OS, all compiled using custom languages and compilers. And the total lines of code for everything, including the OS and compilers, is under 100k LOC.


I can't understand why people don't refer more often to Mr. Kay's message. To be bluntly uncharitable and only half kidding, I do understand why consultants don't buy into it: simpler systems that are less fragile mean less work.


Employees and management don't buy into it for the same reason that consultants don't buy into it. It means less work. Less opportunity to sound smart, seize control, and/or ego stroke. Less variety to break up the work-week's monotony.


Ego is one of the larger problems I've seen over the years. This usually shows, as you mention, with someone trying to sound smart. The irony here is that I've consistently found -- both inside and outside the software industry -- that the smartest people in the room are the ones who can speak about complex topics in a simple way.


It's a dilemma, because "it takes one to know one". While a few smart people in the workplace may be able to appreciate a brilliant dilution of an extremely complex topic into something approachable, most will not understand the starting complexity and just assume it's an approachable topic.

This is fine and everything, but it's bad self-promotion. If you want your bosses to give you a raise, you need them to think that you have a unique, difficult-to-acquire skillset and that it's worth going to lengths to keep you happy.

Unfortunately, modest behavior rarely results in recognition. Bombast is a very effective tool, and at some level, you always have to compete against someone.


Well, it depends. I personally don't feel the need to self-promote to get a raise. I'll probably lose out on a few raises or promotions because of that, but I make a good amount of money and I'm good at what I do. That's enough for me.


This is a fine position to take, but it demonstrates one of our pervasive social problems. People with the humility, modesty, and judgment to make good decisions are frequently passed over because they don't feel the need to lead people along or "prove" their value, whereas clowns frequently realize they have nothing except the show and actively work to manipulate human biases in their favor so that they'll continue to climb the ladder. This works very well. The end result is that good people end up hamstrung by incompetent-at-best managers, and they can take down the ship.

The dilemma re-emerges as one asks himself whether it is right to sit by and allow the dangerously incompetent to ascend based on mind games.

My answer used to be "Yeah, I'll just go to a place where that doesn't happen". I no longer believe such places exist.


Engineers don't buy into it because it's not cool. Complex systems are cool. It goes back to the phrase "well-oiled machine". Swiss clocks. People standing around a classic car with its hood open. A complex system of things working just perfectly is super cool, and fixing them when they break is a popular pastime.


Yeah, this is what I'm getting at with the variety thing. I think that good talent tempers this tinkering impulse when a potential breakage could imperil production. They learn that as fun as complexity can be in the right context, having to lose a weekend staying awake until 5am on Saturday night/Sunday morning trying to fix something stupid cancels it out.

Having a lab and doing experimental stuff is great, but choosing to stake your company's products on it should be a much weightier consideration. In practice, we see that this weight is apparently not felt by many.


I wonder why he chose to build demo applications, instead of a powerful and useful development tool that has strong value somewhere?


Seems like the resources he had to work with in the VPRI project were pretty limited. It will be interesting to see what his team comes up with now that they are working with SAP and YC.

So far, I know about this: https://harc.ycr.org/project/

Hopefully, they're shooting for something like this: https://www.youtube.com/watch?v=gTAghAJcO1o


Something missing from this entire discussion is that developers have a hard time understanding what are truly best practices for their product they are creating/maintaining. It is a bold assertion to say that everyone understands all the different nuances in creating software.


Alan just complicated his own laptop by not using what is proven to work.


No, he made a point, which no one would have believed without his example.

The whole vertical software stack sitting on top of the hardware of a PC is generally considered a massive towering beast with layers of abstractions, and armies of programmers needed to implement and maintain each layer. To say that this does not need to be so would be taken as theoretical, impractical nonsense without any proof. Which actually would be a valid position, because doing software is so hard that generally you can't guarantee something will work without actually doing it.

So, yes, to make that point, and to have it taken seriously, he really needed such an example.


That is absurd. You can't write an OS in 100KLOC.


http://www.projectoberon.com/ is released and small enough for a book in the dead-trees format.


In case you're not joking, MINIX is much smaller than 100KLOCs.


Check out stats on the kOS/kparc project: https://news.ycombinator.com/item?id=9316091

There is also a more recent example of Arthur Whitney writing a C compiler in <250 lines of C. Remarkable how productive a programmer can be when he chooses not to overcomplicate.


Everything I saw in the link looks like K, not C. Do you have a link to the C compiler done in C language?


1) False dichotomy. Developer familiarity is one of the most important metrics for choosing "the best tool for the job".

2) Conway's Law applies in reverse here: If your organization consists of a lot of rather disjoint teams, then microservices can be quite beneficial because each team can deploy independently. If you're one cohesive team, there is not much benefit, only cost.

3) Depends. If you have a well-designed distributed system, it can be amazingly resilient and reliable without introducing much administrative overhead. (From my experience, OpenStack Swift is such a system. Parts may fail, but the system never fails.) There are two main problems with distributed systems: a) Designing and implementing them correctly is really hard. b) Many people use distributed systems when a single VM would do just fine, and get all the pain without cashing out on the benefits. See also http://idlewords.com/talks/website_obesity.htm#heavyclouds

4) Continuous integration was not meant to help with complexity. Its purpose is to reduce turn-around time for bugfixes and new features. If your release process is long and complicated, the increased number of releases will indeed be painful for you. Our team sees value in "bringing the pain forward" in this way. Your team obviously puts emphasis on different issues, and that's okay.


I find microservices can help in just keeping everything small and focused. I know you can do this with a monolith. But having a process boundary really enforces it.


I find that the boundary creates operational headaches. A function call won't time out, deliver a 502 error, have authentication/authorization issues, require load balancing, etc. etc.

A REST API will.

Plus, once you've debugged a problem that involves crossing 5 microservice boundaries you'll start to wonder if it was all worth it.

Monolith is also a wrong (and somewhat derogatory) word to describe a non-microservice architecture. There's nothing monolithic about loosely coupled code running on the same machine.

I really think that microservices are a hack to deal with Conway's law in large corporations. Operationally it's inefficient, but it fixes a nexus of technical and political problems when the correct boundary is picked.


Well said. "Monolith" is a pejorative that prejudices any discussion of said code.


Except most applications are monoliths. Monolith code can still be loosely coupled; however, it is harder.


No, not at all.

The only difference I noticed with respect to Rube Goldberg (what you call "microservices") systems and coupling is that tight coupling between components of Rube Goldberg systems was much more painful: particularly debugging across multiple service boundaries.


Until you mix it with other legacy parts of the system; then it will be a pain in the neck.


> 3) Depends. If you have a well-designed distributed system, it can be amazingly resilient and reliable without introducing much administrative overhead.

About that point: scaling development across many devs is a very difficult problem. It just doesn't scale.

A lot of organizations recruit/grow a lot of people and they try to get away from the human scaling problem by having them work independently/in their corner.

This allows people to execute a lot of stuff... usually the same stuff 10 times, with little coordination or collaboration between them, to the point that it can be felt in the resulting system(s).


Many of the programmers I have worked with actually love complexity, despite trying to convince others (and most likely themselves) that they hate it.

Advice tends to be cherrypicked to suit an agenda they already have (with your example on microservices, the vast amount of resources saying they're very difficult, should be driven by a monolith first approach, and solve a specific set of problems is largely brushed under the rug).

I think because our industry moves so fast there's a fear of becoming irrelevant. Ironically companies are so scared of not being able to employ developers that they're also onboard with complicating their platform in the name of hiring and retention. I think this is down to the sad truth that most developer roles offer very little challenge outside of learning a new stack.


> I think this is down to the sad truth that most developer roles offer very little challenge outside of learning a new stack.

This is a gem observation from this thread. In my own tech sphere the first thing developers are talking about with each other is the new x,y,z lib or framework they're using to accomplish something relatively banal. There's still a lot of work out there that really boils down to basic CRUD and reporting at the end of the day, and developers naturally begin to invent complexities on top of that CRUD to make the work interesting and challenging. I'm absolutely guilty of this first hand.

Personally, I've found it also doesn't help that past projects, e.g. large Rails apps that were never architected well, turn out to be such nightmares to work on. The memory of the end state of these projects lingers with developers as they move on to the next piece of work, and they're inclined to say "no, that doesn't work" and pick up shiny new-tech to do the old job instead.

As a side analogy: most small business construction jobs, e.g. building a timber frame house, don't involve the builders arriving on site and being stumped by the challenge of how to put up the framing for the bedroom walls. There's also very little challenge in these projects, yet the reward is in the completion.


> There's still a lot of work out there that really boils down to basic CRUD and reporting at the end of the day, and developers naturally begin to invent complexities on top of that CRUD to make the work interesting and challenging.

I'd go so far as to say that _most_ work today (at least in startups) is building CRUD apps. The technology has changed, but the work hasn't. Instead of building CRUD apps in Rails, we now build them in React.


> This is a gem observation from this thread. In my own tech sphere the first thing developers are talking about with each other is the new x,y,z lib or framework they're using to accomplish something relatively banal.

The thing is, there is so much other stuff to learn/fix. Where I'm working now, most of my day is spent trying to understand the layers and layers of code they built. If they'd stuck to simpler code I could be learning more about the business and the users' workflows. I could be improving the UI with that knowledge, I could be optimizing the business. Instead all my efforts go into understanding the code base.


You've thrown together a bunch of buzzwords and asked if we are over complicating things.

Buzzwords can mean freaking anything. I've seen great Agile teams that don't look anything like textbook Agile teams. Microservices can be a total clusterfuck unless you know what the hell you're doing -- and manage complexity. (Sound familiar?) CI/CD/DevOps can be anything from a lifesaver to the end of all life in the known universe.

So yes, we are over complicating software development, but the way we do it isn't through slapping around a few marketing terms. The way we do it is not understanding what our jobs are. Instead, we pick up some term that somebody, somewhere used and run with it.

Then we confuse effort with value. Hey, if DevOps is good, the more we do DevOps, the better we'll be, right? Well -- no. If Agile is good, the more Agile stuff we do the better we'll be, right? Hell no. We love to deep dive in the technical details. If there aren't any technical details, we'll add some!

Software development is too complicated because individual developers veer off the rails and make it too complicated. That's it. That's all there is to it. Throw a complex library at a good dev and they'll ask if we need the entire thing to only use 2 methods. Throw a complex library at a mediocre Dev and they'll spend the next three weeks writing 15 KLOC creating the ultimate system for X, which we don't need right now and may never need.

It has nothing to do with the buzzwords, the tech, or software development in general. It's us.


It never seems complicated when I am doing my own side work for some reason. There are no design meetings, no hours tracking, no arguments on best practices, no scrum, no testing frameworks, no devops, etc. I do use git and minimally create bash scripts to simplify repetitive tasks for deployment, but it's just a huge contrast to working in teams, where something simple takes about 50 times longer.

I think keeping things as simple as possible and always going for that goal will increase velocity overall. Everything should be subject to scrutiny for promoting productivity and open to modification or removal. I know there is a balance where you have to increase complexity in a team environment but keeping friction as low as possible in terms of process and intellectual weight couldn't hurt.

The most productive place I've seen so far is a huge athletic brand I worked for where they kept teams at max 5 people in mini projects. This forced the idea of low overhead and kept the scale of management needed small. The worst place I worked for in terms of unnecessary complexity is a well-known host (although it is the best place to work in terms of people); they hired offshore with a one-size-fits-all mentality and layered in as much shit as possible, slowing development down to a mud crawl. I don't buy into process over productivity.


One of the things that helps when you're developing your own projects is that you can single-handedly decide to ruthlessly cull parts of the project that take lots of time but provide little value, and you (probably) have decent insight into what those are. You're also probably not at a scale where doing certain really crappy, slow parts of the job can pay off, so you can skip those.

Default form elements with some basic, nice styling to fit your theme? Form done in one hour. Special snowflake version of the same thing from the design team, which has no idea what the platform can and cannot easily be made to do, but the client is absolutely in love with? Two days, a third party dependency or two, some extra environment-specific bugs to track down later, and generally increased fragility (so more time lost in the future). This has slowed you down now and increased the resources required for the project indefinitely. But the client looooooves it.

Support Android pre-5.0, at the cost of 20% more development time, a pile of extra bug reports, an uglier, harder-to-maintain codebase, and a much much longer testing cycle, for a side project? Hell no. Client says that will cost them $4 million/yr not to support those? Ugh. FINE.

And so on, and so on.


Continuous Integration is (with a reasonable test suite) one of few elements of software development that I would consider almost essential for any long running project. It's just too useful to have continual feedback on the quality of the system under construction. (And this is before bringing in micro-services or any other complicating architectural pattern.)

Where I might agree with you more are on points 3 and 4: 'Advanced reliability' and 'Microservices'. While I have no doubt that these are useful to solve specific problems, I think as a profession we tend to over-estimate the need for these things and under-estimate the costs for having them. To me this implies that there needs to be a very clear empirical case that they support a requirement that actually exists. I'd also make the argument that the drive for microservices within an organization has to come from a person or team that has the wherewithal to commit resources over the long-term to actually make it happen and keep it maintained. (ie: probably not an individual development team.)


I think the "learn to code" movement as well as overly-technical interviews for developers are partly to blame for this. It's well-known that developers are tested on how to do something that's considered technically difficult, such as abstract CS problems or a complicated architecture, but they are rarely asked why certain tools, practices or architectures should or should not be used. Comparative analyses to make objective recommendations between different solution alternatives are also rare in my interviewing experience, but they are one of the most valuable skill a competent software engineer should have.

I don't agree on point 4 though - CI can be something as basic as running a monolith's tests on each commit, which makes sure that builds are reproducible (no more "works on my machine").


No. You are correct. Honestly I think you can solve a lot of that by following on from one of Dijkstra's core principles: Separation of Concerns.

When you practice good separation of concerns, specific choices in different areas can be more easily fixed later. It requires having decent APIs and being thoughtful about the interaction of different components, but it helps immensely in the long run.

Microservices are one way to practice separation of concerns, but it can also be practiced in monolithic software as well, by having strong modular systems (different languages are stronger at this than others).


Well, yes, we are overcomplicating it. Except on the parts we are undercomplicating... And I still couldn't find anybody that can reliably tell those apart, but the first set is indeed much larger.

1 - Do not pick a new language for an urgent project. Do look at them when you have some leeway.

2 - Yep.

3 - There's something wrong with your ops. That happens often, and it is a bug, fix it.

4 - If CI is making your ops more complex, ditch it. If less complex, keep it. In doubt, choose the safest possible way to try the other approach, and look at the results.

5 - Do not listen to consulting experts, only to technical experts. The agile manifesto is a nice read; read it, think about it, try to follow it, but don't try too hard. Ignore any of the more detailed methodologies.


Much of the problem in the things you mention is that those things are specific solutions that have been confused with goals. I.e., "we're supposed to build microservices" is a horrible idea, as opposed to "given this particular situation a microservice is a great fit".

Understanding the possible benefits and drawbacks of any solution is important. It's important in whether or not that solution is selected, but also to make sure that the implementation actually delivers those benefits.

It's very common in our industry to use "best practices" without understanding them, and therefore misapplying the solutions.


As you've intimated, most people have a very superficial mental model.

Facebook == respected tech brand == someone I should copy. The end.

Guy I know uses Cassandra == developed by hot tech brand Facebook == cool by mental association with Facebook.

Guy I know uses MSSQL or Oracle == developed by crusty old Evil Empire Company that cool people don't want to work for == bad.

Conclusion: We must use "big data" so we can be like the cool people -- err, because we really have some big data.

This doesn't sound like the outcome we'd expect from technical people making these decisions, but we can obviously see that it's what we're getting.


I am working in huge non-IT company as a software developer. I guess that is what gives me a totally different point of view on your lessons:

1) Without a unified technology stack and a common framework we would not be able to build and maintain our applications. We decided on C# as it works best for us. Currently we are 5 developers. Not a single one of us had ever written a line of C# code before entering the company - learning the language from the ground up enables us to pick up patterns that our colleagues who joined the company earlier found to be best practices.

2) If you are not introducing a whole new stack with every microservice that you develop, the devops costs are quite low.

3) I agree with you on that - I think redundancy always introduces more complexity. However there are systems that handle that job quite well (e.g. SQL Server). For application servers we use hot-spares and a load balancer that only routes traffic to them, when the main servers are not reachable. This works for us, as all our applications are low traffic applications.

4) Continuous integration works brilliantly for our unified stack. In the last two years we went down from 1d setup + 20min deploy to 10min setup + 20s deploy.

5) We use agile methodology whenever possible and it works like a charm. However we had a lot of learnings. Most recent example: Always have at least one person from all your target groups in any meeting where you try to create user-stories.

Planning our software architecture has been a key element in my teams success and I do not see a point where we are going to cut it.


1. What problem are you optimizing for? "The job" encompasses code, but it also encompasses staffing. It's a lot easier to hire Java developers than Scala developers. In a leadership role, your responsibility isn't just the day-to-day code - it's the whole project.

2. Microservices vs monoliths is a see-saw. You build a monolith, find it's a brittle, incomprehensible hairball, and you break out microservices. You build microservices, find that operational headaches are killing you, and start consolidating them into monoliths. Which kneecap do you want the bullet in?

3. Fix what breaks.

4. Continuous integration is vital. But it needs to be evolved along with the system. There's this thing I say... "Have computers do what computers do well, have humans do what humans do well". Handling complex and repeatable behavior (i.e. builds and test suites) should absolutely be automated as much as possible. Think continuous integration sucks? Try handing it off to humans for a while! You'll learn whole new levels of pain.

5. All process is about (or should be about) specific, discrete communications issues.


> Which kneecap do you want the bullet in?

Funniest thing I've heard all day!


That's the fun of working with me. I say funny shit!

I occasionally refer to the final steps of a project as "bayoneting the wounded" too.


Yes, we are over complicating it, but that is primarily about trying to take what is essentially an artistic process and turn it into a regimented process (a known hard problem).

Rob Gingell at Sun stated it as a form of uncertainty principle. He said, "You can know what features are in a release or when the release will ship, but not both." It captured the challenge of aspirational feature development, where someone says "we have to have feature X" and so you send a bunch of smart engineers off to build it, but there is no process by which you can start with an empty main function and build it step by step into feature X.

That said, it got worse when we separated the user interface from the product (browser / webserver). And your rants about microservices and continuous integration are really about releases, delivery, and QA (the 'delivery time' of Gingell's law above).

These are complexities introduced by delivery capabilities that enable different constructions. The story on HN a few days ago about the JS graphics library is a good example of that. Instead of linking against a library on your computer to deliver your application with graphics, we have the capability of attaching to a web service with a browser and assembling on demand the set of APIs and functions needed for that combination of client browser / OS. It's a great capability, but to pull it off requires more moving parts.


Link to the post about the graphics library please?



- 1) Choose languages that developers are familiar with, not the best tool for the job

95% of the time, a language that your developers are familiar with is the correct tool for the job simply for that reason! There are cases where it is not, but those involve special-case languages and special-case systems. If you don't know what special case means, then your situation is almost certainly in that 95%.

- 2) Avoid microservices where possible, the operational cost considering devops is just immense

"If your data fits on one machine then you don't need hadoop ..." Same thing applies here. Microservices have place and putting them in the wrong one will bite you bad.

- 3) Advanced reliability / redundancy even in critical systems ironically seems to causes more downtime than it prevents due to the introduction of complexity to dev & devops.

Then there's probably something wrong or limited with the deployment that needs to be reviewed (a 2-node cluster when you need a 3-node cluster, bad networking environments, etc.). If you have a reasonable setup with solid tech under it, deployed per specs, then this should not be true. If, on the other hand, something is out of whack (say running a 2-node cluster with Linux HA and only a single communication path between them), you're going to have problems, and the only way to fix them is to get it done right.

- 4) Continuous integration seems to be a plaster on the problem of complex devops introduced by microservices.

I'm not sure about this, but if your deployment system requires CI you have a problem. An individual, given hardware and assets/code, should be able to spin up a complete system on a fresh box cleanly and in a reasonable timeframe. (Fresh data restores can take longer of course, but the system should be runnable barring that.) If this requires something like a CI script or an ansible/chef/etc. script (i.e. it can't reasonably be done manually), then your deployment process is probably too complex and needs to be re-evaluated.

- 5) Agile "methodology" when used as anything but a tool to solve specific, discrete, communications issues is really problematic

Agile is commonly used to gloss over a complete lack of structured process, or a broken one. Even with Agile there should be some clean process and design work that goes into things, or you're hosed.


4: If my stack requires a message broker to run, how is setting one up manually supposed to be better than using the ansible scripts?


For me, the trinity of development as a solo developer seems to be:

1. Writing code while using as many useful libraries and tools as possible to avoid recreating wheels

2. Continuous integration set up early on to handle the menial work and to let me concentrate on 1.

3. Constantly evaluating and researching what technology is available and newly appearing to give me an edge, because having an edge is never a bad thing in this field.

Agree with some of what OP said, especially about methodologies becoming hindrances and HA tools becoming points of failure.


I've seen that the addition of unit testing is a big cause of complexity. Previously simple classes now have to be more abstracted in order to unit test. Add mocks, testing classes & test frameworks. Some unit tests are handy, but I don't think it justifies the additional complexity. For the apps I write I'd like to see more emphasis on automated integration testing and fewer unit tests, so we can write simple classes again.


The "threat" of having to add unit tests should force developers to write their classes and components in a way that is easy to reason about. In particular: * put as much functionality into pure functions * depend less on statically-linked globals * import all significant collaborators across seams that can be mocked. * keep state in a small atom, rather than strewn about

If you write code like that, you get many of the benefits of unit tests, whether or not they are actually written.

Perhaps it's a good idea to write a test harness (e.g. larger integration tests) for old code so that you have a reasonable chance of catching it if it becomes broken, and to focus on writing new code in a testable fashion.
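
Here's a small sketch of what that style looks like; the names and the domain are invented. The collaborators come in across a seam that a test can fake, the business rule is a pure function, and the mutable state is one small dict:

    from dataclasses import dataclass
    from datetime import datetime, timezone

    def discounted_total(total, is_returning_customer):
        """Pure function: testable with a plain assert, no mocks needed."""
        return round(total * 0.9, 2) if is_returning_customer and total > 100 else total

    @dataclass
    class OrderService:
        repo: object                 # collaborator injected across a seam; a test passes in a fake
        clock: object = datetime     # even "now" comes from outside, so tests are deterministic

        def place_order(self, customer_id, total):
            returning = self.repo.has_previous_orders(customer_id)
            order = {                # state kept in one small "atom"
                "customer": customer_id,
                "total": discounted_total(total, returning),
                "placed_at": self.clock.now(timezone.utc).isoformat(),
            }
            self.repo.save(order)
            return order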


The threat of having to write unit tests will not have nearly the effect of actually writing them.

It's too easy to fool yourself into thinking you've written testable code.


In my experience, it's usually not the existence of unit tests themselves that's causing an issue, but that most of them are badly written. One telltale sign is when writing the unit test becomes overly painful (like too much code setting up mocks), it usually means that your class is not simple enough or has too many dependencies.

Proper unit testing also complements integration testing in that corner cases can be handled at the unit test level, therefore reducing the amount of integration test code, which arguably is much more brittle, runs slower, and is more complicated to write.


Many unit tests are just written to test code, which is at best irrelevant. At worst your codebase is 2-3x bigger and more abstract than it needs to be, where useless tests keep code alive and useless code keeps tests alive. Test functionality, as close to the promises given to outside consumers as is feasible, be it API or UI for other people/projects/services. This is the stuff that needs to work (and thus often needs to be stable). No one cares whether a function deep down inside the code, used as part of the implementation of promised functionality, works. Delete it if you can.

The only case where I'd support "unit tests" as typically practiced (small units, isolated functions/classes) is around core competence (defined as narrowly as possible). But then I'd argue that this functionality should be put into a library anyway, which is used by product codebases. And then the tests are tests for the functionality promised to the products.


I'm not arguing against writing integration tests, they are as important if not more important, as you've said. Maybe I've only seen badly written ones, but my issue was against integration tests that check for example if this ever so important, but hidden, flag is being set properly after an API call when that can be checked at the service level. Someone eventually decides that flag is unneeded, and a whole host of tests fail and someone has to dig several levels deep to figure it out.

I guess I shouldn't have used the word 'brittle', but this is what I was thinking of.

And of course, I think unit testing anything and everything is absurd and not a good use of developer time.


I don't think you can avoid meta-debugging. That is, debugging your asserts or tests that you hoped would detect bugs instead of being the bug. Sometimes because more realistic tests unveil a bug, sometimes (as in your example) because underlying code functionality has changed. This is unavoidable but also often enlightening. To my mind, it's even okay if most of your bugs are meta - because these are usually very fast fixes, and it probably means you have a lot of checks. But by the same token, I would agree with you that all such tests have to be well-written, not mailed in, for just the reasons you give. It's too easy to assume that writing tests is somehow a fairly trivial task. Until you end up debugging the test.


I've seen this too. Unit testing was mandated from on high and it's something developers never learned to do properly. My telltale sign is more than one logical* assert. A test should usually be only a few lines of code, a dozen lines should be all that's needed for 99% of tests.

*Logical meaning only test for one thing, not a single assert statement. So testing for null and testing if a value is set is fine, but testing if 10 values are set correctly is not.
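
For what it's worth, here's roughly what I mean as a made-up example: two assert statements, but one logical check.

    import unittest
    from decimal import Decimal

    def parse_price(text):
        currency, amount = text.split(" ")
        return {"currency": currency, "amount": Decimal(amount)}

    class ParsePriceTest(unittest.TestCase):
        def test_parses_currency_and_amount(self):
            price = parse_price("EUR 12.50")
            # Two assert statements, but a single logical check: "the price was parsed".
            self.assertEqual(price["currency"], "EUR")
            self.assertEqual(price["amount"], Decimal("12.50"))

        # Also checking formatting, rounding and error handling in this same
        # test would be the smell described above.

    if __name__ == "__main__":
        unittest.main()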


If integration tests are brittle, then the integration is likely to be brittle. In my opinion this is something to fix, not workaround by testing lower down.


Done well, unit tests are invaluable. I'm a relative late-comer to unit testing, but can attest to its value.

The discipline of unit testing forces me to think about what risks I'm introducing with my new code.

Unit tests drive better design - smaller classes, looser coupling, better separation of concerns, functions that don't have side effects.

Best of all, unit tests reduce regressions. I can't count the times my test suite has prevented me from introducing a bug in my app.

I can refactor code with much more confidence than if I did not have 400 unit tests checking my work.

Most recently, these tests proved their worth when upgrading my app to Swift 3.0.


My first reaction to your (very thoughtful) review is that #4 seems out of place.

CI can be a way of enforcing the simplicity of the others - it can be a way of tunneling the build process into assuredly straightforward steps and preventing individual team members from arbitrarily (or even accidentally) adding their own complications into build requirements.

Other than that, I think you are definitely on to something here.


There's this book that I've been mentioning around here called Elements of Programming https://www.amazon.com/Elements-Programming-Alexander-Stepan... that makes exactly this claim, that we are writing too much code.

It proposes how to write C++-ish (it's an extremely minimal subset of C++ proper) code in a mathematical way that makes all your code terse. In this talk, Sean Parent, at that time working on Adobe Photoshop, estimated that the PS codebase could be reduced from 3,000,000 LOC to 30,000 LOC (=100x!!) if they followed ideas from the book: https://www.youtube.com/watch?v=4moyKUHApq4&t=39m30s Another point of his is that the explosion of written code we are seeing isn't sustainable, and that so much of this code is algorithms or data structures with overlapping functionalities. As the codebases grow, and these functionalities diverge even further, pulling the reins in on the chaos becomes gradually impossible. Bjarne Stroustrup (aka the C++ OG) gave this book five stars on Amazon (in what is his one and only Amazon product review lol). https://smile.amazon.com/review/R1MG7U1LR7FK6/

This style might become dominant because it's only really possible in modern successors of C++ such as Swift or Rust that have both "direct" access to memory and type classes/traits/protocols, not so much in C++ itself (unless debugging C++ template errors is your thing).
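
This isn't the book's C++ subset, but here's a rough Python-flavoured sketch of the claim (the function names are mine): one algorithm written against the minimal requirements it needs replaces many special-cased copies.

    # One generic reduction over any associative operation...
    def reduce_nonempty(op, xs):
        it = iter(xs)
        acc = next(it)          # assumes a non-empty range, as the book's algorithms tend to
        for x in it:
            acc = op(acc, x)
        return acc

    # ...covers what might otherwise be sum_ints, concat_strings, union_sets, etc.,
    # each written separately and slowly diverging from the others.
    total = reduce_nonempty(lambda a, b: a + b, [1, 2, 3, 4])             # 10
    merged = reduce_nonempty(lambda a, b: a | b, [{1, 2}, {2, 3}, {4}])   # {1, 2, 3, 4}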


Have you looked into the STEPS program by Alan Kay? It tries to recreate a modern computing setup, from the OS up, in 20k lines of code...

http://www.vpri.org/pdf/tr2012001_steps.pdf

"If computing is important -- for daily life, learning, business, national defense, jobs, and more -- then qualitatively advancing computing is extremely important. Fro example, many software systems today are made from millions to hundreds of millions of lines of program code that is too large, complex and fragile to be improved, fixed, or integrated. (One hundred million lines of code at 50 lines per page is 5000 books of 400 pages each! This is beyond humane scale.)

What if this could be made literally 1000 times smaller -- or more? And made more powerful, clear, simple, and robust? This would bring one of the most important technologies of our time from a state that is almost out of human reach -- and dangerously close to being out of control -- back into human scale."

...and of course if you haven't seen it, you'll want to check out the Forth guys who want to do everything with 1000 times less code:

http://www.ultratechnology.com/forth.htm


I'm aware of this, but Alan Kay's work and this seem to be orthogonal. Alan Kay talks about reducing real systems that have compilers, inputs, etc., whereas Elements talks about the day-to-day ways of writing code. Alan Kay might come up with a new keyword whose semantics magically lets you cut out 30%, but Elements shows you that if you make your types behave a certain way, generics will let you cut out a lot of code.


I would counter that this appears to be a repeatedly emerging consensus, including Stepanov, Kay, Simonyi, and a number of other "greats", that an approach that involves some degree of metaprogramming, guided by domain problem, is the way forward. They differ on terms - cooperating systems, model-driven, intentional, generic - and focus - whether to create new syntax, or to guide the creation of specific algorithms or data structures - but they aren't debating the power of the approach.


I recently picked up this book. Seems quite good, but I'm also mathematically inclined (there's a lot of abstract algebra in there).


The only way to have any sense of a good or solid development platform or lifecycle is, to me, to look at your specific situation and tailor everything to your deliverables and needs. Doing anything because of industry trends or academic pontificating will lead you towards the solution someone else had success with in a different circumstance.

Microservices work fine in some situations, agile works fine in some situations, but until you find that you are in one of those situations trying to bend your deliverables to meet a sprint-cycle or some other nauseating jargon will cause, as you put it, over-complication or just poorly targeted effort. (It can also cause enough stress to dramatically affect your health, I know better than most)

Those moments of solidarity between product and effort are real gems that I've only recognized in hindsight.


You are right. Agile, languages, CI, devops are all tools not solutions to problems. Blindly applied, they will not get the results promised.

First focus on identifying the primary job to be done: build a valuable piece of software with as little effort as possible given your current team and existing technology.

Second, consider how valuable the existing software is and whether it really needs to be rewritten at all. Prefer a course that retains the most existing value. It is work you won't have to repeat.

Third, choose tools that maximize the value produced per hour of your team. CI, Devops, Microservices, Languages all promise productivity and reliability benefits but will incur complexity and time costs. Choosing the right mix is part of the art of software management.


You're right, though you should end most of your comments with "for us".

We've been burned by the microservice hype, and it took a while for us to realize that most of the touted benefits are for larger organizations. These "best practices" rarely include organizational context.


Fatal problems that hit start ups seem left-field, but they are baked into the design choices we make, often without discussion - because they seem part of "current accepted wisdom".

My major issue for startup software development is that often software is developed too discretely - with a utopian 'final version' in mind. Developers don't think holistically enough - they focus on details at the expense of design. "current accepted wisdom" is intangible, ever shifting, whereas the failure of a system is very real and can lead to loss of income etc...

Lots of start up companies don't design systems with humans in them, they write code as if it was a standalone thing - they often leave out the human bits because they are hard to evaluate, measure and control - variety of skill, ideas, approaches, mistakes, quality of life etc.

In my experience, this variety (life) often comes back to bite companies that can't handle eventual variance because of poor system design - not because of a choice of platform / provider / software etc.

I have been reading a lot around the viable system model (VSM) for organising projects. It seems to fit with what my view on this is. I am trying to implement a project using this model currently.

https://en.wikipedia.org/wiki/Viable_system_model


As everyone is saying: do what is reasonable and useful.

E.g. let's make an online shop.

It has browsing, purchasing and admin sections.

Browsing is simple: query the DB and show HTML. It's probably the most used section as well and needs to be reliable. Having it as a different service means the admin section could break while users are still able to browse. Same for payments - sometimes it's crazy complicated. I think of microservices as big product feature boundaries that can work independently. A failure in one doesn't affect the other.

Continuous integration: once you have your tests and some auto-deploy scripts, you have an engine. You push code, tests auto-run, a live staging environment is created for the latest code, and you play with it. Looks good? Merge with master. It's deployed to production. The idea is that deployment is effortless and you can do it multiple times a day, just like git push. Tests don't have to be only unit tests. We run integration tests against dummy accounts periodically, from different regions in the world, on production. This means you are alerted as soon as something breaks. Fast deployment and great telemetry mean you can always revert to the last known good state easily.
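For illustration, here is a minimal sketch of that kind of periodic check in TypeScript (assuming Node 18+ for the global fetch; the base URL and /health endpoint are hypothetical, not from our actual setup):

    // smoke-test.ts - run from CI or a scheduler against staging or production.
    // Assumes Node 18+ (global fetch); BASE_URL and /health are hypothetical.
    const BASE_URL = process.env.BASE_URL ?? "https://staging.example.com";

    async function checkHealth(): Promise<void> {
      const res = await fetch(`${BASE_URL}/health`);
      if (!res.ok) {
        // A non-2xx response means the latest deploy is suspect: alert,
        // then consider reverting to the last known good state.
        throw new Error(`Health check failed with status ${res.status}`);
      }
    }

    checkHealth()
      .then(() => console.log("OK"))
      .catch((err) => {
        console.error(err);
        process.exit(1); // Non-zero exit fails the CI job and alerts whoever is on call.
      });

The real checks exercise product features against dummy accounts, but the shape is the same: a small script, run on a schedule, that fails loudly.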

Investing in tests is a pain but it pays off in the long run. Especially if you have other developers working on same code base.

Just don't overdo it. I believe these ideas came from pain developers actually faced, and they used them to solve it. If you're not feeling the pain, or won't feel it, then you don't need the remedy.


I think in many cases complexity just comes from lack of experience and poorly understood requirements.

I've had my fair share of cases where I ended up implementing something needlessly complicated, only to later realize my approach was terribly misguided. I'd like to think I'm slowly improving on this as time goes on.

The software world has a big discoverability problem. Even though I know there's probably prior art of what I'm working on, I don't always know where to look for it.


Honesty.


I think you wanted to say: "Are we simplifying things in software development?" All of the points you have made are actually simplifications of what might be the optimal solution.

Imagine the solution space as some multidimensional space where there is somewhere an optimal solution. The dimensions include the habits of your programmers, the problem you are trying to solve, and the phase of the moon. Microservices, a special form of redundancy, continuous integration, agile development are all extreme solutions to specific problems. Solutions which are extreme in that they are somewhere in the corner of your multidimensional solution space.

They are popular because they are radical in the way they conceptualize the shape of the problem and attempt to solve it. Therefore they seem like optimal solutions at first glance when really they only apply really well to specific toy models.

Take e.g. microservices. Yes, it's really nice if you can split up your big problem into small problems and define nice and clean interfaces. But it becomes a liability if you need too much communication between the services, up until the point where you merge your microservices back together in order to take advantage of using shared memory.

Don't believe any claims that there is a categorically better way to do everything. Most often, when you see an article about something like that, it is "proved" by showing it solves a toy model very well. But actual problems are rarely like toy models. Therefore the optimal solution to an actual problem is never a definite answer from one of the "simplified corner case scenarios" but it is actually just as complex as the problem you are trying to solve.


1) No way. Absolutely not. Not if what you're building is intended to last. Any language/ecosystem you choose has costs and benefits. You will continue to pay the costs (and reap the benefits) long after your developers could have become fluent in a language.

Certainly the language your developers already know is better than one they don't, all things being equal. But your rule is way too simplistic.

2) Of course. Avoid every complex thing where possible.

3) This means the cost/benefit ratio was not considered closely enough when planning these features. Again, avoid every complex thing where possible.

4) This is a strange one. Most people doing CI are not building microservices. CI is really more about whether you have different, independently moving pieces that need to be integrated. Could be microservices, could be libraries, could be hardware vs software. If you only have a single active branch everyone's merging into regularly, you're doing CI implicitly. You just might not need it automated.

5) Take what you can from the wisdom of agile, and then use your own brain to think. And don't confuse agile with scrum.


1) Sounds like there's a lot more to the story.

    * Was the "best tool" what the devs thought it was?
    
    * Was it something they would hate using? Say, Java for Perl devs?
    
    * Was there a steep learning curve? An obscure language?
2) How big is the system? How complex is the business? How ops-friendly are the devs to start with?

3) You (or someone) must know how much system failure would cost.

4) CI can help with your devops, but its main point is to help with your software quality. See #2.

5) Totally agree, though you can also try being agile about "Agile" and taking just whatever parts work for you.

My $0.02 anyway.

(Aside: years ago I worked on a team doing ad-hoc semi-agile, which worked pretty well. I'm 99% sure I could have doubled our output and launched a management-consulting career if I could have credibly held the threat of Real Corporate Agile Scrum over their heads. But that was before the flood. One of them works for Atlassian now, ironically enough.)


Though perhaps it's considered a component of 2), one could add Docker/containerization. I've watched folks spend weeks and weeks getting Docker setup for a service that probably didn't need to be containerized at all. And then once it's Dockerized, introspection/debugging/etc... seem to become much more difficult.


And what sort of services were those that didn't need to be containerized?


I agree with you, but not fully.

1) Well, this is only the case if the project is short enough that it's not worth switching. Learning a new tech for a team takes months; only switch if the project is taking years.

2) Again, only use them for bigger (>2 years lifecycle) projects

3) Depends on what you need. We built a full-stack app with around 99.95% uptime (a few hours of downtime/year) in around 3 months of architecture dev time. Getting more would have hugely increased dev time, but this number was good enough for us.

4) Disagree. You can build simple CI pipelines in a matter of weeks, which will pay for themselves in a few months thanks to better uptimes, happier employees, and shorter release times. Again, it's only needed if your project lasts for more than a year.

5) Disagree. Agile is very good if someone knows it well (it takes a few days to learn). It's not needed for very small teams (<6 people); they can self-manage.

But I think there are problems:

- People getting hyped about the latest trendy stuff. Use bleeding-edge/new tech for hobby projects, not for money-making ones.

- Do not switch technologies unless really needed; don't fall for the hyped library of the week.

- Do not use a dynamic language for any project that will have more than 5K LOC in its lifetime.

- Do not overengineer. For example, if the code is clean and works but has that ugly singleton pattern, it's OK. Don't introduce the latest fancy IoC framework just because you read in the clean code book that it's better.

- Unit tests are overhyped. Use them for critical components on the server, and that's it. IMO the hype about them is because dynamic languages scale so badly that you need tests, otherwise you're fucked. Rather, choose a well-proven statically typed language, a good IDE, and take code reviews seriously.


"Perfection is Achieved Not When There Is Nothing More to Add, But When There Is Nothing Left to Take Away" - Antoine de Saint-Exupery

IMHO, it takes technical and personal maturity to come to the conclusion above. Good architecture (or software or dev process or anything) should only have/contain the simplest things that are necessary.


> Avoid microservices where possible, the operational cost considering devops is just immense

Is it, though? There's more complexity due to more moving parts, sure. But being able to solve issues by just issuing a "scale" kubernetes command in the CLI is priceless. As is killing pods with no drama.

However, what are we talking about here? Small business ecommerce? Your monolithic app is probably going to work just fine.

> Advanced reliability / redundancy even in critical systems ironically seems to causes more downtime than it prevents due to the introduction of complexity to dev & devops.

Systems can and will fail. If you can eat the downtime, by all means forget about that.

> Continuous integration seems to be a plaster on the problem of complex devops introduced by microservices.

Could you stop singling-out microservices? We have deployed continuous integration with old school rails apps before and it was extremely valuable.

Agree about agile.


>Is it, though? There's more complexity due to more moving parts, sure. But being able to solve issues by just issuing a "scale" kubernetes command in the CLI is priceless. As is killing pods with no drama.

On the contrary, getting to the place where you can issue commands over k8s on a project not specifically designed for it has a very real and very significant cost. Companies are killing themselves trying to do this for no good reason.

Need a new node? Fire up whatever it is that you fire up: Ansible, Chef, AMI, bundle of custom bash scripts, whatever. No need for the massive complexity of k8s.

Specifically, what benefits are you seeing from k8s (i.e., what unique utility does the "scale" or "delete pod" command bring that is not reasonably resolved by less complex solutions)? It's just causing me a lot of frustration right now. I can see Google's need for it. Not having much luck seeing its use in non-Google-scale businesses.

If you're doing a from-scratch thing that you can architect around k8s and think that's more convenient than more traditional approaches and can accept its currently-quite-serious limitations, that's whatever. If you're talking about some tangible objective benefit that most companies need to be able to enjoy here, please do elaborate.


> > Avoid microservices where possible, the operational cost considering devops is just immense

> Is it, though? There's more complexity due to more moving parts, sure. But being able to solve issues by just issuing a "scale" kubernetes command in the CLI is priceless. As is killing pods with no drama.

> However, what are we talking about here? Small business ecommerce? Your monolithic app is probably going to work just fine.

Maybe I just haven't seen enough projects, but every significant criticism of microservices I've seen assumes that "microservices" effectively means "each team does their own thing in a way that greatly increases complexity and maintenance costs of our monolith app."

Of course it's not going to pay off if you're maintaining all your own hardware / servers and each team is using their own stacks, languages, and frameworks... especially if they're all bottle-necked to the same database instance anyway. That's basically magnifying the potential downsides and minimizing the benefits.

Our industry definitely has some "use the new stuff because it's cooler" sentiment, but I think we also have another distinct mentality that shows up a lot.

> "I'm most familiar with hammers. We tried a cordless drill once a few years ago but its battery died and we had to wait for it to charge! The hammer still worked, though. So we stick to hammers and I recommend others stay away from drills."


I think #5 is the most problematic here, and was stated perfectly.

One method I have used successfully is sending surveys to people outside engineering. Send it to department heads and anyone else who seems interested in what engineering does. Ask them if they feel engineering is transparent, and whether they feel important bugs/features get followed up on. Let the responses guide you, and make the minimal process changes you need to in order to satisfy people's real concerns.

One other piece of advice: if certain people seem obsessed with process, it's possible they are poisonous to your whole organization and should be let go. Some people want process to be there to give them work (e.g. "managing the backlog" or "writing stories"), instead of doing actual work like programming or product research.


As someone working at a Scrum company transitioning from PHP "monoliths" to DDD microservices shielded by nodejs gateways and apis and even CQRS/ES on the horizon I will answer yes.

But I guess that'll look cool in our resumes.

I must say sometimes I envy our mobile developers that are a bit immune from all that.



I agree with most of your comments. I think as a fairly new profession we are still finding our feet when it comes to best practices. I don't think there is one system that will work across the board for all trades. I mean I would think it took longer than 30-40 years to work out the best way to plumb, wire a house etc.

Sometimes when estimating work, I think how long would the same project take to build 5, 10, 15 years ago. It's not often that time spent coding today is any quicker than before.

Arguably we get better quality software now with unit tests, better compilers and better tooling. Perhaps I've just got some massive rose tinted glasses on!.


All problems revolve around structure, and as customers want more features and capital builds, the structures get more complex. So we build even more complex structures to offset the complexity, but now things that were once simple get brought along and become more complex. Eventually the company hits a breaking point and re-invents its structures to better suit its needs, but these grow in complexity once again given time. It is a never-ending battle, and every business is at a different point in their complexity cycle.


Choose languages and frameworks that developers are familiar with.

Microservices are fine if you can rely on shared CI/CD infrastructure and automate execution properly, maintaining rapid build/test cycle times. They start to suck if people aren't familiar with them and everyone's laptop has to run its own parallel multi-service topology for pre-release regression tests every time you change a line of code... developer focus, flow and efficacy will be reduced.

I agree that redundant HA systems are usually not required. In the past it was expensive to get. However, tooling is now so good that with reasonable developers and reasonable infrastructure design, you can get it very, very cheaply if your services are packaged reasonably (CD-capable) with basically sane architecture and your infrastructure is halfway modern. This truly is excellent, because gone are the 1990s of everyone-relies-on-grizzled-sysadmin-and-two-overpriced-boxes-with-failover.

I don't think CI is a plaster, it is a great way to work, but like any tool or workflow is not appropriate in all situations.

We do over-complicate. Methodologies are too meta: programmers are already operating at max concurrent levels of abstraction. Better to incrementally adjust workflow (CI/CD on the workflow for the CI/CD of the workflow!). That's not to say that there's no value to some people thinking at this level some of the time, but Yoda told me "desk with agile literature much, sign of untidy mind be". I think he was right.


I feel like all of this just comes back to judgement calls. You can't pick technologies in a vacuum, and you can't generalize technology choices.

It's not very fair to make these claims without knowing all of the details around the situation. Microservices CAN be a pain, but it might offset a greater pain of trying to coordinate a monolithic deployment. It depends on things like team size, budget, and technology available to you.

This is where I see the disconnect between employers and most developers. "Programming" isn't a job. Your employer doesn't pay you to write code. They pay you to solve problems. The good employers don't care what tools you use to solve the problem, just that you solved it. The bad employers will force you to use technologies and buzzwords that probably don't apply to your situation. You should be able to defend all of your decisions and have good reasons for them.

On the flip side, not everything you try will work - that doesn't mean that it's a bad option, just that it didn't work for your situation. You don't need to have a redundant low-priority memo system because you don't get enough value out of it to justify the overhead of maintaining it.


I think you confused trends for wisdom.

It used to be wise to wear bell bottom jeans and perm your hair. It also used to be wise to wear colored suspenders, or pocket protectors. And shoes with lights in them, and color changing shirts.

Granted, those same weird misguided trends were probably followed by the same people who accomplished everything we have today. I think it's the effort you put into the work that determines its output, not the details of its development.


Point 5 is really insightful. When you read it carefully, it implies that agile "methodology" will soon become the prevalent methodology. Because a successful project is all about managing a massive amount of "specific, discrete, communications issues". And doing so on a daily basis is the best option.

Off-topic note: point 5 is also the way to go with your wife/husband/girlfriend/boyfriend, your kids, your friends, etc.


Interesting idea. I think dinner is the best place for an evening family SCRUM meeting.


SCRUM meetings with cheese and wine. THIS - IS - BRI-LLI-ANT !!!


Honestly, it is probably just you (and your peers).

Quite frankly chances are the team you have sucks at operations, lacks the necessary experience to design complex systems, and probably doesn't do the fundamental engineering to make a reliable software product.

1 - false dichotomy: the best tool is one you have mastered. Your team has individuals with 20+ years of development experience on it, right? (Probably not)

2 - micro services are supposed to have small areas of concern and small functional domains to minimize operational complexity. Your services are programs that fit on a couple screens right? (Doesn't sound like it)

3 - redundancy's goal is to remove single points of failure; you should be able to kill any process and the system keeps working. (The word "critical" suggests you have SPOFs)

4 - CI is a dev tool to avoid merge hell by always merging. CI is often used by orgs with massive monoliths because of the cost of testing small changes and too many cooks trying to share a pot. Ultimately, if you don't have well-defined interfaces, CI won't save you. (You had well-defined, published interfaces with versions, right?)

5 - agile is a marketing term for consulting services to teach large orgs how to act like small, effective teams of experts. (Hint: you need a team of self-directed experts with a common vision and the freedom to execute it - you got that, right?)

Most problems in tech are related to pop culture. Because we discount experience (because experienced developers are "expensive") we get to watch people reinvent existing things poorly. Microservices, soa, agile, ci, these things are older than many devs working today. The industry fads are largely just rebranding of old concepts to sell them to another clueless generation.

Computers are complex systems, networks of computers are complex systems. Complex systems are complex. Some complexity is irreducible, and complex system behavior is more than just a mere aggregation of the parts. People tend to over complicate their solutions when they don't understand their actual problem. They see things they are unfamiliar with as costly and overly complicated (as in your examples above).

Your problem is a culture that doesn't value experience and deep understanding. You and your team will over complicate things because you don't know better yet.


1) Yes, except that you should try new languages sometimes. E.g. if you use Spark in production as a critical part of your system... take the time to learn Scala.

2) Is a pet peeve of mine. Theoretically microservices are good, but we don't have a way to orchestrate them. What's lacking (in programming languages terms) is a "runtime" and "debugger" and of course a widely-tested & reliable set of "libraries" for most common tasks. I think it's possible to do something like that, as long as you start imposing some restrictions on what a "microservice" is and how it talks to the outside world. Also, in this frame of thinking, it becomes apparent that "deployment" is actually "programming the system, at a high level". It's not "just configuration", configuration is code - if you moved the complexity from your "code" to your configuration, you just moved the complexity into a language that has very poor tools to work with.

3) My rule of thumb for most systems is "avoid redundancy in the control plane; you can and should have redundancy in the data/data processing plane; don't plan for 100% uptime in the control plane, plan for very short downtimes/ for fast recovery when something goes wrong"

4) My experience is that continuous integration is good for all but the very small teams. For multiple reasons, not just "microservices"

5) "Agile" is horrendously misused, the cargo cult is in full force. It should be about prioritising & doing the important things, reducing overhead to the minimum necessary. It has become an overhead in itself, with "sprints" reduced to ridiculously low periods, meeting over meeting at each sprint, etc.


I think microservices were your big issue. But yes, getting into the politics of pure scrum, kanban, or whatever is a big drag.

DevOps has its merits and will work well if your team can stop trying to develop newer, better scripts and learn when to say it's good enough. I saw one team revise their scripts over and over for a whole year when they could have been using that person for new features/bug fixes.


1) Choosing JavaScript for a Math heavy project would likely be a mistake. There are plenty of other examples of picking the wrong language for the wrong job. That's where this statement falls apart.

2) Depending on how you bring them all together, yes this can be true. If you have something like AWS API Gateway, then microservices may be manageable. If you're rolling your own custom solution with something like nginx or haproxy, you're probably wasting a ton of cycles.

3) Again, I tend to agree with this. Premature optimization seems to be the norm these days. Especially when you get devops people involved. Do we need every single layer in our stack to be "highly available" if we have zero users? The answer is NO.

4) Well, this sounds clever, but I'm not sure it really means anything. Setting up something like Jenkins to watch your GitHub repos and build the branch and run the tests can alert you to issues early and really isn't that difficult to setup.

5) Nothing wrong with TDD as long as you don't go overboard. Nothing wrong with standups, planning or retros. Nothing wrong with short sprints.


The rule is there are no rules. The answer is "it depends".

If the only language your developers are familiar with is Ruby and you're developing a real-time, high-performance, system, then you shouldn't write it in Ruby.

If you need the kind of availability/scalability/encapsulation that microservices provide in your application/use-case, then you should use them. Don't break your application into microservices just because everyone says it's a good idea. An Angry Birds app on an iPhone doesn't need to be split up into microservices running on said iPhone.

If you don't have redundancy and you lose your server then you're hard down. If you're OK with that fine. If you want to continue operation with one server down then you need redundancy. Redundancy doesn't necessarily add as much complexity as you seem to imply.

Continuous Integration is usually a good idea regardless of all other variables. If you have more than a single developer working on a system it's a good idea to keep building/testing this system with every change so you can catch issues earlier. You start very light-weight though with a small team. Even a single dev can do CI, it's not that hard.

Agile is just a buzzword but it doesn't hurt to familiarize yourself with the Agile Manifesto while making sure you're aware of the context in which it arose. It's really mostly about understanding that requirements often change and that we're dealing with humans. Again different projects, team sizes, situations will require somewhat different approaches. Sometimes the requirements are well understood and will change very little. Sometimes you know nothing about what the software will do when you're done.


A certain amount of complexity or complication is required to solve problems. Sometimes, you will undershoot the mark, and not fully solve the problem. Other times, you will overshoot the mark, and create problems in the form of overcomplicated answers.

> 1) Choose languages that developers are familiar with, not the best tool for the job

It's a tradeoff - ramp-up time vs efficacy once ramped up. It's probably okay to let your devs rock it old school with vanilla JavaScript for your website frontend - it's probably not okay for them to try and write your website frontend in COBOL, even if CobolScript is apparently a thing, just because they don't know JavaScript.

> 4) Continuous integration seems to be a plaster on the problem of complex devops introduced by microservices.

CI is great plaster for all kinds of problems, not all of which you'll be able to solve in a reasonable fashion. Of course, you may have problems which would be better to solve that you're using CI as a crutch to avoid solving - or to simply deal with the fact that you haven't gotten around to solving those problems yet.

In game development, I use CI to help 'solve' the problem of my coworkers not thoroughly testing all combinations of build configurations and platforms for each change. 5 configs and 6 platforms? That's already 30 combinations to test, so it's no wonder...
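To make the arithmetic concrete, here's a tiny sketch (the config and platform names are invented, not my actual setup):

    // build-matrix.ts - enumerate every config/platform pair a change should build.
    const configs = ["Debug", "Release", "Profile", "Retail", "Editor"]; // 5
    const platforms = ["Win64", "Linux", "macOS", "PS4", "XboxOne", "Switch"]; // 6

    const jobs = configs.flatMap((config) =>
      platforms.map((platform) => ({ config, platform }))
    );

    console.log(jobs.length); // 30 builds per change - far too many to test by hand.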

> 5) Agile "methodology" when used as anything but a tool to solve specific, discrete, communications issues is really problematic

On the other hand, other companies rock "flat" management well past the point it's effective, and may lack any kind of methodology to keep progress on track - which is also problematic.


Keep in mind, as others have said, the "accepted wisdom" is coming out of high-income, high-velocity, technology companies. However, a lot of development is done at companies whose primary business is not software (or not technology). Additionally, many established businesses care less about velocity than hungry startups.

In that case, I think a different set of wisdom applies: 1. Choose languages that are easy to hire for, easy to train for, have lots of 3rd party support, and easy for junior developers to use (and that your developers already know). This generally means Java, C#, or Python (on the backend)

5. First it's important to define "agile" in your context. Agile, in terms of the agile manifesto, is almost always beneficial to the project, although it won't speed it up. Agile, in terms of cargo culting specific artifacts, is often just a waste of time and source of confusion. If your organizational definition of agile is "no project manager needed", then you're in trouble. Good project managers are essential.


Why not all peoples are searching actual software?Its so difficult how many years in making system anyone cannot knowing of all of app or platform or app or plug in or device or codechar all things are supporting each other if one app is not update directdownload from google play store,windos for microsoft store how to give the part way of moving simple all App or Apk have in their basic code ID company .how about they making of you will be use one app or apk and then you sign out App and delete software but your using activity remain in more sure you can next time you reuse same app you will be seen or not your history this is one things but alittle difference in fre game app how about you play inside but you will be exit and then you replay this you can see or not your recent play section if you make log in account connect sure you can making or not resume game section game.all of game have in prevent and resume section already contain but you are not own create you cannot open this background codechar or security for example...


There's a great discussion to be had in scaling your practices to the human factors.

For a solo developer, just breaking things out into modules and massaging the formatting is likely to be a net negative - something you might do once you've accumulated months of cruft and are ready to start handing it off to others or repurposing it for a new project, but also a chore that will get in the way of thinking about the job in front of you right now, a temptation to think top-down planning will come to your rescue. Your advantage is in being able to change direction immediately, and there are a lot of ways to give that up by accidentally following a practice for a larger team.

As a team gets bigger, it's more important to be cautious because of momentum; any direction you pick for development will be hard to stop once it gets going.

At the same time, there are processes and automations that help at every scale, and at the small scale they're just more likely to be little scripts and workflow conventions, not ironclad enforcements.


Well, I think there are 6 points to answer here:

>> 0, Are we over complicating software development?

Yes, in many cases we are overcomplicating software development. I think a large part of OOP is too complex to produce reliable services easily, though it's still possible. Simplicity is not as popular among developers as it should be. I often run into complex code that can be replaced by a 10x smaller code base that is much clearer than the original.

1, Sure

2, I am not sure why you think that; you need services that do a few things well and are individually scalable units. This used to be SOA (service-oriented architecture), and micro-services lately. There are cloud vendors out there who make it super easy for you to run such services for a reasonable price on their platforms without a devops team.

3, See point 0. Complex systems fail more than simple ones. Failure isolation and graceful degradation should be properties at design time. The best is to have stateless clusters (no master-slave service or registry required for correct operation) where you can scale capacity with the number of nodes.

4, Continuous integration is way older than the term microservices. It contains patterns that companies figured out by shipping code that had to be reliable, and it is optimised for frequent changes, i.e. when you're developing a new service or product. It is just a way of giving instant feedback to developers.

5, There are so many talks and videos on the web about how agile, applied bluntly, is harmful that I think this is a well-understood question. Use a method that works for the team and gives the business insight into what the team is doing, and you are good. I have used Kanban for almost 10 years with distributed teams (software and systems engineering) and it works perfectly for us.

+1 for simpler code and simpler software


Re: #2, what are some such services?

AWS seems to be the go-to standard but it's amazingly complicated.


You are basically saying much the same as Dan North is saying in his newest take on Agile: https://www.youtube.com/watch?v=iFLBG_bilrg

Agile is dead, long live Agile. The difference now, is that we understand trade-offs. There's no silver bullet and there are no absolutes.


Did people honestly think that Agile was going to be a silver bullet? If so the consultants won.


Can't agree enough!

I actually wrote an article [1] last week exploring single-tenant SAAS architectures because I was annoyed with how complicated our multi-tenant plans were. Was bummed the HN post [2] didn't get any traction because I was really hoping for some critical feedback.

For me, the holy grail is a cost-effective system that doesn't back you into scaling issues down the road and is simple enough to be run by a single developer (on the side) rather than a dedicated team of sysadmins. Pipe dream? Maybe. But it's worth a shot.

[1]: https://hackernoon.com/exploring-single-tenant-architectures...

[2]: https://news.ycombinator.com/item?id=13385474


From what you are describing, it seems like there is more of a problem with your team than with anything else. Maybe you don't have the right set of expertise in the team, and people tend to work better with what they are comfortable with. Microservices/redundancy support/CI at a fundamental level increase the complexity of how you go about things, but they do have benefits. They require a way of thinking and developing that should be a cultural fit for the team for it not to feel like you are constantly fighting the system. One way to get there is to incrementally add these after the primary project is done. When tackling one thing at a time, things end up being simpler, the need for these things gets into everyone's working habits better, and they are no longer fighting the system.


devops is good stuff. Just apply the same standards (and typically the same answers) to your developers as you would to your deployment world. You should be able to answer questions like "How does a new developer get going within 5 minutes?" in the same way that you answer "How do we build and deploy a new app?", and both the local developer and the remote system should be debugged and monitored in the same way.

devops isn't bad, and will speed up onboarding new staff, growing, and helps your devs and ops people immensely.

On the rest I'd largely agree with you... other answers may only apply at a certain scale, or complexity, or some other set of parameters that may not apply to you now.

Solve the problem you have now, and the problem you'll definitely have in the next 6 months.

The rest is for the future.


You need to work in an environment devoid of any practices like agile/CI etc., and then you would know the difference. They might slow down your progress, but they make up for it with consistency and discipline, eventually leading to the development of better (more reliable) software!


If you keep on following every hype train, then yes, you will get overcomplicated software development.


From my perspective, the problem is that others believe everyone else is caught up in the same trends they are. If someone starts to proselytize something - whether that's build management, microservices, or even pairing React with Redux by default - individuals start to think it's the "new thing" and adopt it rather than critically think about it.

Personally, I tend to shy away from tools unless they seem to do something of significant value for me that outweighs their cost on my development process. The "best tool for the job" is the one that allows me to finish a project in a timely manner, not one whose memory footprint is 10% lower.


It's all about long-term vs short-term. Everyone architects software for the short term, I'd say the industry at large has collectively lost/never had the vision and wisdom to do anything else.

Now maybe if you are a tiny-ass start-up, sure, but for a big established company, this is just bad economics.

Why do we talk about "disrupting" the "behemoths"? Why is everything done in tiny-ass largely-parallel teams? Very few companies have had serious thoughts about programming at scale.

I don't dispute that doing things the right way is often a huge up-front initial investment, but you do eventually get over the hump.


> Everyone architects software for the short term, I'd say the industry at large has collectively lost/never had the vision and wisdom to do anything else.

I think everyone architects for the long term, they just do so poorly. The problem is that architecture has become synonymous with "more layers".


OK, so in the beginning there were no layers. People occasionally wrote a layer, but it was common to just say "fuck it" and throw it away. As late as the 90s, you read about C programmers writing hash tables all the time, wtf.

Then, somewhere along the way (I don't know exactly when), we hit an inflection point where there were some layers that didn't work quite right but were hard to do without, so we'd try to shim them.

Really good long-term engineering means also ripping up the under-performing layers, attacking the unneeded complexity. This does not mean giving up on abstractions altogether.


Can you elaborate on what you mean by "programming at scale"?


Scale in terms of the number of employees.

I think most people at the org should be doing what Alan Kay called "second order work" - libraries foremost, but also programming tools, etc. Just about any end business goal should be a trivial composition of existing abstractions; if it isn't, that's a problem to be addressed.

Work reuse always reflects the dependency graph of libraries, tools, etc. In this type of organization, I'd expect much bigger (in depth and width) dependency graphs than in the current independent-teams model.

The end result is organizations should become more "agile" as they grow because they have more high-quality abstractions to lean on.

But current practices always put end-goal over process, and thus have no chance of cultivating this efficiency.


Continuous integration on any project which will be developed for more than 1 year by more than 1 person should provide a positive return on investment.

The rest are debatable, but I feel that the point above is close to an axiom these days.


I've used microservices, but on QNX, where you have MsgSend and MsgReceive, which make message passing not much harder than a subroutine call, and not much slower. UNIX/Linux was never designed for interprocess communication. You have to build several more layers before you can talk, and the result is clunky.

If you're crossing a language boundary, it's often better to use interprocess communication than to try to get two languages to play together in the same address space. That tends to create technical debt, because now two disparate systems have to be kept in sync.


At this point, I think a lot of software development problems are complicated because we are building on a platform that really isn't designed for the apps we want. The web makes everything a lot more frustrating and hard. It complicates testing and requires a lot more process than is justified by the apps. At some point the era of the web will come to and end then maybe we will get a net gui (probably based on messaging) that will hopefully take the lessons of the web to heart.


I think this is where a lot of varied work experience (small / large / old / new companies) is key, because it gives you perspective. You can then ask yourself, "why does this process suck so much, and why didn't it when I worked at X?" In my experience, people who come from a monoculture background usually seem not to question dubious software, architecture and methodology choices that end up killing productivity and sanity.


Yeah. If you're going to use languages, methodologies, and architectures without understanding, and without evaluating them for how well they fit your situation, many things will be painful. Don't follow the fads, whether methodologies (Agile), architecture (microservices), languages, or frameworks. Use what's appropriate for what you need to do.


1) The best tool is useless if people can't avail of its power

2) True. Microservices are usually premature optimization

3) True

4) CI is a good idea regardless of using microservices or not

5) You might elaborate on this item


No it's not just you. In general, "follow latest trends blindly" has never been a winning strategy in software development at any point in computing history. Now, that is not to say that you never change your tools or methodologies once you've mastered your existing tools. But the new tools/techs need to pass a very high bar before you subject your team to these.


Yes, we are over complicating software development.


Another way of saying this is that it is not science.

Usability needs to be applied to more than just the end-user experience: it should apply to the entire SDLC experience.


The biggest issue I have is the current fashion for functional languages resulting in mixed style code bases. I've been working on established applications written in Java/C#/Python that have OO, imperative and now functional code all mixed together.

If I had it my way we'd choose one or the other but no one can agree which is the best way to write code.


The style takes a backseat to readability and maintainability. Try creating complex object queries in .NET without LINQ; good luck reading and maintaining that code. I'll take my lambda expressions any day over that, thank you. I remember "the good old days" of C# 1.0, and you'd be crazy to want to go back there.
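For what it's worth, the same readability argument shows up outside C#. A rough TypeScript analogue (the data shape here is invented): the lambda-style chain reads like the query you meant, while the hand-rolled version buries it in bookkeeping.

    interface Order { customer: string; total: number; shipped: boolean; }
    const orders: Order[] = [
      { customer: "Ana", total: 120, shipped: true },
      { customer: "Bo", total: 80, shipped: false },
      { customer: "Ana", total: 30, shipped: true },
    ];

    // Lambda-style: filter, sort and project in one declarative chain.
    const shippedTotals = orders
      .filter((o) => o.shipped)
      .sort((a, b) => b.total - a.total)
      .map((o) => `${o.customer}: ${o.total}`);

    // Manual equivalent: same result, more bookkeeping to read past.
    const shipped: Order[] = [];
    for (const o of orders) {
      if (o.shipped) shipped.push(o);
    }
    shipped.sort((a, b) => b.total - a.total);
    const manual: string[] = [];
    for (const o of shipped) {
      manual.push(`${o.customer}: ${o.total}`);
    }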


That is because there isn't one. Analogies for software are notoriously imprecise but... You wouldn't pound in a nail with a sawzall. Trying to be pure with any style leads to awkward code in anything but the most straightforward tasks. Do what works, be like water etc.


It isn't new when I say that it is hard to come up with a simple solution.

In most cases people tend to work under pressure, which ends up with the problem nicely fitted to the tool at hand. You can hardly blame anybody for that. What we are not doing enough of is going over the "solution" again and again. Solving a problem the second time around is always easier.


Yes. What you have discovered is the same epiphany most developers have as they get more experienced and better at their jobs.


I'd like to push back on continuous integration being over-complicated. It's easy to do using off-the-shelf software and it makes life a lot less stressful when you have confidence that your changes are good before landing them in production. It's such a win that I'd set it up even with a 10 person team.


I use TeamCity to automate running my unit tests and generating/publishing NuGet packages to my private NuGet server... and I work alone. It has value even there. :)


> 1) Choose languages that developers are familiar with, not the best tool for the job

The language that you're familiar with generally is the best tool for the job. Most software work can be equally done well (or at least greater than acceptably well) in a number of languages. Not having to learn a new one (or a new framework) is a plus.


All development teams or products are not the same. Sometimes microservices can improve the quality, and sometimes the opposite.

It is important to know why you do some things, instead of applying Hype-Driven-Development.

Do what is best for you and your team, instead of what is best for someone else (with a different product, problem, and team).


Are You a front-end developer? :D

Yes, I think very often we over complicate even simple things. But sometimes it pays in the long run.


Most of the time we're creating complexity when we can avoid it and we're often proud of it.

The problem is that it's very difficult to find the right compromise between time, cost, and an architecture that can support the growth of the service, so we either build something too thin or something too complicated.


Continuous integration is also necessary for bigger projects with many interdependent parts. I worked on such a project with about 100 developers, and I just can't imagine how it could have been efficiently developed without CI. But for small projects it maybe isn't that critical.


Just linking my comment to the other thread in response to this post:

https://news.ycombinator.com/item?id=13429618

tldr; simplicity is a great virtue and difficult to achieve in practice.


Could also be interpreted as: "devops is not yet mature/lacking tooling".

Don't get me wrong, complexity has grown. Agile is a joke. But, e.g., build systems have been maturing for 30+ years. Their cousins, deploy systems, have a long way to go.



> 1) Choose languages that developers are familiar with, not the best tool for the job

+1

Programmers have affinities for languages. They will work better with some languages than others and they know which languages fit them well. Those are the best ones to use.


No you are absolutely right. 95% of problems in software development are created by the software developers themselves. At least that is my experience having worked in software for 20+ years in companies all over the world.


I think one cause of the problems is that what is good at Google scale is not necessarily relevant for a team of ten people.

I think the lesson here is be critical of "best practices" and think about what will work in YOUR context.


You are correct. We cargo-cult Google and Facebook so much that we forget to apply lessons learned decades ago. People and interactions over processes and tools. There is no silver bullet. You Ain't Gonna Need It.


... I'd agree. Put briefly, if you're trying to save the day, people first.

But when you stop needing to save the day and want to build something with particular properties, you may find that process has to come first.


So... that big list is the lessons learned at the startup or at the big company? It's really not clear to me.

Same problem with all the comments that begin with "at my last company". Which kind was it?


> lack of communication

You can't talk about lack of communication and blame "devops" at the same time. If there was a lack of communication, you aren't "doing devops."


1/ What language did they choose? Why? What made them think language X or framework Z would give them a competitive advantage in the first place, and what was the result of that choice?


The issue is that solving real problems is hard, but making things complicated is easy, fun, and looks a lot like solving real problems if you aren't paying careful attention.


1) Doesn't always work if you want to target embedded systems or need performance, and all you know are scripting languages with huge overhead like Ruby, JS, Python, etc. Some languages really are better than others.

2) Could say avoid distributed computing if your problem is not distributed. This is more about being a blind follower of the latest hype.

3 & 4) Complicated DevOps is a bad idea in general. Stuff that seems to simplify things on the surface, like Docker, is actually hiding tons of complexity underneath.

5) To most people, Agile = JIRA = Sprints = Scrum. It's corporate mentality codified, so it's no surprise that a lot of startups avoid it.


Software development goes off the rails because there are no physical materials involved, so there is no built-in limitation to prevent costs from going out of control.


Truth is: you're young and you're becoming an experienced developer... You somehow have to go through these stages. In the end, you'll be all right.


Some people have gotten so used to complicated architecture and workflows that they find your questions odd. Just check the comments.


Hey, we gotta eat. If people won't pay for software licenses then we'll make them pay for training and consulting services.


Hi, I'm happy to be posting anon right now. Can someone ELI5 the difference between libraries and packages and a microservice?


I have seen Microservices be the death of a lot of startups / corporations. Proceed with caution.


I think the best way is to start backwards from the future: what are the requirements? Then plan towards today: what do you need, and when? ... That's how I planned my training program as an athlete. The most important question is what do I need (to do) right now.


As a guy whose idea was successfully pitched to a successful tech company, to which I am still connected, I'm going to say yes. The classification aspects of specialty training keep the process from being as fluid as it needs to be in order to be truly game-changing, rather than merely meeting whatever expectation is expected.

I know this sounds different to everyone; here's the point:

The user needs to use it. The focus is always on everything else. Only when there's been 'some' success does the user (and by user I mean the entire field the program is for) become an influence. This lack of empathy keeps any leadership from ever happening when everything is based on 'past successes of other companies' rather than on trying to lead effectively.


YES!


Figuring out how to do things simply is remarkably hard. After twenty years of this, I feel like I'm beginning to be able to design simple systems some of the time.

The problem with much "currently accepted wisdom" is that it doesn't explain exactly what is being balanced. "Works for my organization" is the equivalent of "works on my machine." For example,

1) "Best tool for the job" when applied to languages nearly never is a question of the intrinsic merits of a language design. There have been quite a few discussions recently on Hacker News on the virtues of a boring stack, that is, one that everyone else has already beaten on so much that you can expect to hit fewer issues.

2) Microservices are a tradeoff. If you have an engineering team of five hundred shipping a single software as a service product, one of your biggest issues is coordinating releases among all those people without having your services ping-ponging up and down all the time. Microservices are an answer to that. At that scale you've already had to automate your operational troubles, so it doesn't impose that much additional operational cost. If you have an engineering team of ten, then none of this applies to you.

3) High availability, like all concurrency, is hard. Try to write your own code so that it scales horizontally by simple replication and depends on stock components such as Kafka, Zookeeper, etcd, or Cassandra to handle orchestration. In many cases your reliability budget may be such that you can run a single system, automate some operations around it, and be just fine. It's only when your reliability budget doesn't allow that, or your workload forces you to orchestrate parallel work, that you have to go this route.

4) Yes. Nearly all discussion of agile software development that I've seen focuses on rituals without the applied behavior analysis underlying them. For example, a standup meeting has a small set of goals: establish a human connection between everyone on the team on a regular basis; air things that are blocking individuals in a forum where they are likely to find someone who can unblock them quickly; have everyone stand up and take responsibility for what they are doing in front of their team; and serve as a high bandwidth channel of communication of important information (the build is going to break this afternoon for an hour, etc.). If those outcomes are being achieved in other ways by your group, then there's no reason to have a standup. If you're doing a standup and it's not accomplishing one or more, you need to revise how you do it. Human behavior and interaction is something to be designed and shaped in an organization. What works in a team of three with excellent communication may not work in a team of ten or fifty or five hundred.


looking at some react-todo-demo and its dependencies - complicating? not at all!

J2EE will soon look like a reasonable thing.


YES WE ARE-


it is not just you, but we are hopelessly outnumbered.


There is a lot of BS in software development. Always has been, probably always will. Everything is a tradeoff. Understand the tradeoffs that you are taking, listen for the principles, and you can ignore most of the noise.

On to your questions.

1) Choose languages that developers are familiar with, not the best tool for the job

How familiar developers are with the language is part of what determines what is best for the job at hand in a real organization.

It isn't the only factor. For example if you're doing something new (to you), doing it in the language that you find wherever you are learning it from makes sense because you'll be more likely to get help through complex issues.

That said, do not underestimate the support advantage of using a consistent toolset that everyone understands.

2) Avoid microservices where possible, the operational cost considering devops is just immense

See https://martinfowler.com/bliki/MonolithFirst.html for emphatic support.

If you go the microservices route, think ahead about predictable challenges with debugging failures 3 calls deep, and plan in advance for monitoring etc tooling to solve it.

3) Advanced reliability / redundancy even in critical systems ironically seems to causes more downtime than it prevents due to the introduction of complexity to dev & devops.

As the old saying goes, DBAs are the primary cause of databases going down. Reliability is not something that you just plaster on top blindly. And systems are good at finding failure modes that you never thought of.

4) Continuous integration seems to be a plaster on the problem of complex devops introduced by microservices.

No. Continuous integration is actually a fix for developers checking in clearly broken code and then nobody discovering it later. That said, it does little good without a number of other good practices that are easy to ignore.

5) Agile "methodology" when used as anything but a tool to solve specific, discrete, communications issues is really problematic

This one generated the most discussion. I would say sort of, but you went too far.

Any set of poorly understood principles, dogmatically applied, is going to work out badly. Agile is actually a set of good principles that addressed a major problem in the common wisdom back in the day. But the pendulum has swung and it is often applied poorly.

That said, there are other problems in organizations which are prone to, "poorly understood principles, dogmatically applied"...


Yes


1997: I created my first website on Netscape Navigator. I was 10.

2007: I created a textbook trading RoR web app. I was 20.

2017: I'm struggling to create my first front-end website on Chrome and I haven't decided on the back-end. I'm 30.

The barrier to entry is indeed very high and no signs of slowing. I blame the explosion of low-interest capital from VC's fueling this fracturing.


Building a website doesn't have to be complicated. You built a Rails site 10 years ago. You probably used jQuery, if you used JavaScript at all. Why can't you do the same today?

The real problem here is that not enough developers understand that "just because you can, doesn't mean you should." Once more of us get a handle on that, life will be better.


> You built a Rails site 10 years ago. You probably used jQuery, if you used JavaScript at all. Why can't you do the same today?

You can and I do. However, I am thinking of using Polymer or Vue.js, as I think they are much lighter candidates than React and Angular.

The power of marketing is underrated in the developer circles.


If you still use Rails and jQuery, why bother with Polymer or Vue? What benefit do they give you?

Promise I'm not giving you a hard time. I'm honestly curious if there's something I'm overlooking.


I asked the same question last year, and to be honest, building an SPA is tough with just jQuery. It's more that my needs changed; I don't think SPAs can be ignored in 2017, and progressive web apps and AMP will put a huge dent in the native apps space.

I just like to think that I'm developing a mobile app with a front-end JavaScript framework... it's just that the tooling and prerequisite knowledge are quite chaotic. Finding the right (up-to-date) articles is half the battle, as they're scattered across endless git repo pages.


Run `( find -E ./ -regex '.*\.jsx?' -print0 | xargs -0 cat ) | wc -l` inside your `node_modules/` folder (that's BSD find; with GNU find, use `-regextype posix-extended` in place of `-E`). It counts lines of code in JS files. Prepare to be sad.


I'm not sure why an SPA with jQuery is tough. $.ajax. Send data. Do stuff in the back-end. Return data. Update divs. If it is tough, you might be trying too hard. I'm not trying to be glib... it just sounds like you might be buying into the over-complexity that the original question was talking about.
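
The whole pattern is something like this (a sketch; the endpoint and element ids are made up):

    // fetch data, render it, done - no framework required
    $.get('/api/items', function (items) {
      var list = $('#item-list').empty();
      items.forEach(function (item) {
        list.append($('<li>').text(item.name));
      });
    });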


It starts to become a problem when approaching 1k LOC with multiple developers. The benefit is separating the UI layer from the data layer. I didn't think much of this at first, but the complexity is very real, and there's a real cost attached to a slow, fragile, tightly coupled jQuery app.

If you are using jQuery to build a big SPA with multiple devs, you can certainly do it, but it's not enough. At the same time, I'm against over-engineered frameworks that impose high cognitive friction on developers. Vue.js and Polymer hit the right notes for me. I feel that Vue.js is a reaction to React & Redux (the trend seems to be to use them even when we don't need them). There will be use cases that make one more favorable than the other, but not many shops employ 1000+ person teams working on something like FB. The way I like to think about it: the premium you pay to mitigate complexity you don't actually have results in a loss of productivity.

I won't use something just because everyone else is. Often, the majority is wrong and easily influenced by marketing and authority (especially if you are good at lighting billions of dollars on fire to build an elusive monopoly). It happens all the time in dev communities, but what makes it worse is the complete lack of insight into why you are not FB or Google. We blindly emulate them hoping to be like them.

I say, approach the problem with a blank sheet of paper. Figure out and evaluate what works for you and your team while watching out for marketing myths and other disinformation from the unicorns.


The problem with jQuery is that state ends up stored in HTML elements. That makes bigger applications hard to debug, maintain, and learn. Things like accessing a hidden variable become DOM traversals rather than property accesses.
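
A contrived example of the difference (the cart markup and object are made up):

    // state lives in the DOM: reading it means a traversal and a parse
    var qty = parseInt($('#cart .item[data-sku="abc"] .qty').text(), 10);

    // state lives in a plain object, and the DOM is just a rendering of it:
    // reading it is a property access
    var qty = cart.items['abc'].qty;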


jQuery is just a framework, if even that. If you choose to use it to store state in html elements, that is your choice. And yes, a common one. But the framework does not force it. If you make an AJAX call in jQuery, you get JSON back. (Or, I use it to get JSON back... you can send whatever you want back.) You can do whatever you want with that JSON.

Frequently, I do store metadata in a DOM element because the next event I will react to is a click on that DOM element, so I already have a handle on it from the UI element jQuery gives me... I do not have to traverse the DOM. But if the future use of the data is NOT going to be a response to a click on a specific DOM element, then no, I will do something else with the data.
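
Something like this (a sketch; `order`, `showOrder`, and the list id are made up):

    // the item I render is the thing that gets clicked, so the id can ride
    // along on the element itself - no traversal needed in the handler
    $('<li>').text(order.summary)
      .data('orderId', order.id)
      .appendTo('#orders')
      .on('click', function () {
        showOrder($(this).data('orderId'));
      });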

Again, just because everyone else does it doesn't mean you have to, and doesn't mean it is inherent in the tools. I'm not saying jQuery is the best tool out there... I'm saying that complexity in an SPA doesn't come from jQuery itself, but from design choices made with it.


True, it would be better to say that it's not great to build an SPA with just jQuery. It complements a number of other frameworks that are good for SPAs.


My experience is with C# and Knockout, which is pretty similar to Rails and Vue. I've found the sweet spot is to avoid SPAs (which I'm pretty sure you can do with Vue, too): generate the HTML server-side and only use JavaScript for the dynamic parts where it makes sense.
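
A sketch of what I mean, Knockout-style (the widget id and endpoint are made up):

    // the page itself is rendered server-side; bindings are applied only to
    // the one element that actually needs to be dynamic, e.g.
    // <span id="price-widget" data-bind="text: price"></span>
    var vm = { price: ko.observable(null) };
    ko.applyBindings(vm, document.getElementById('price-widget'));

    $.getJSON('/api/price', function (res) {
      vm.price(res.price);   // the bound element updates itself
    });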


Absolutely! Unfortunately, this is very hard to sell to teammates on the SPA bandwagon and management who require all developers to be buzzword-compliant.


I think it would be a pretty tenuous thread that links low-interest capital to "Python in a Nutshell" exceeding 700 pages, but I'd like to see it expounded upon.

https://www.amazon.com/Python-Nutshell-Second-Alex-Martelli/...


1. I think this is rather obvious: work with what you have. Maybe think about hiring specifically for areas your team is lacking in, as long as the team as a whole will see a decent benefit from it.

2. I hate to say you're doing microservices "wrong", but I'd really look at project structure and practices as the likely culprit behind the cost of doing devops with microservices.

3. This seems like an engineering fault, rather than some implicit principle behind those concepts causing more downtime.

4. How is CI a plaster on the problem of microservices? CI is useful with or without microservices.

5. Agile was always meant to be a guideline, not a be-all and end-all. It's meant to get your team to figure out how it wants to work, and to value working code over process. See: http://agilemanifesto.org/

The problems you are describing seem like big problems with your team, engineering, and management. No amount of process and technology is ever going to fix a dysfunctional (sorry if that's too blunt) team. What I get from this: instead of having processes in place that make it easy to move code out, you're removing tooling to intentionally slow things down, with the superficial result of "stabilizing" the entire development effort. The solution appears to be to get your team to write less code, and to force management to bow to the new reality of these "stabilizing" changes. Both of which can, and sometimes should, be done regardless of the processes and tooling in place.

The best code is the code you don't write. But don't blame the tooling for making it easy for a team to be lazy and drop the all-important habit of self-critiquing (i.e., "Do we really need this feature?", "That'd be nice to have, but right now we're managing to get things done.", "Did I actually test my code, was it reviewed, or am I just counting on being able to shove a fix out later while our redundancy systems pick up the slack?").


Quite a few of these issues are common in other orgs. "You're doing it wrong" isn't great advice :/


I would say in a lot of cases understanding that there are some basic failures is probably a great starting point to cleaning up the development effort. There's not much else I can say other than that, considering how vague OP's post is.

"Good" engineers will get things done and use common tooling to their advantage. This requires actually understanding the principle behind the tools, not just shoving things in and hoping it all magically works.

If you have a lot of "good" practices that are supposed to make it easy to move code around and you find that things just keep breaking, one could reasonably assume that it's simply highlighting an underlying issue. I'd start figuring out which engineers (and managers) are causing more work for the organization than they're putting out.

What I think we're seeing from OP is a lot of "in name only" practices.


Some good practices aren't a good fit for a particular organization. Moving the discussion to whether your engineers are "good", rather than whether they understand the organization's needs, is reductive.

If you want to develop better practices in an industry, saying the practitioner should be "good" isn't very helpful. Of course they should be good! But unfortunately, despite the trope, we can't all hire the best, and part of the reason we have best practices is to work well without only hiring the top 1% of engineers.

An example from my experience (mentioned in another comment): microservices are a good practice in many larger orgs, because a big piece of what they solve is political, but the overhead of running a distributed system at a small org often isn't worth it.


I put "good" in quotes for a reason. I never said "hire the best"; that isn't a requirement for anything that was stated.

There shouldn't really be a measurable overhead of running a distributed system, at least in the context of microservices. I strongly disagree with the sentiment that a distributed system isn't "worth it" at smaller organizations. I'm part of one, and it helps keep things flexible while increasing reliability of the "overall" system(s).

But that's neither here nor there. One shoe size won't fit everyone, but OP ran down a gamut of things and seemed to have an issue with each one. It is exceedingly unlikely they are doing anything eccentric enough to justify proclaiming CI is just a band-aid on the broken concept of microservices. I will contend that the sources of OP's insights are... misplaced, and that by breaking down efficiencies and flexibility they're merely masking certain underlying problems.

What's more probable: that an organization hired some wrong people, or that a generic list of practices, strongly supported over the course of two to three decades, is to blame for the organization's failings? I guess that's my take on it.


No, but your rules don't resonate with me even though I feel the same overall.

1) Not the best language, but not the worst either. There's no excuse for C these days except microcontrollers (even though I still like it), and the fairly decent JVM can't excuse Java. I think people can come up to speed in a new language pretty easily. It's paradigms that are hard to learn, not syntax.

2) Sounds like you don't have devops. That's a solve-it-once sort of problem, and you have to solve it soon enough for some pieces anyway, so it shouldn't be put off. You need to be good at it.

3) It certainly can. It increases the size of your system considerably: not just the original system, but also the debugging rules for that system, plus (as noted) the debugging rules for the debugging rules, ad infinitum. But what do you propose as a solution? Perfectly trained humans on call? A procedures manual as detailed as the hypothetical code?

4) Well, lack of CI seems insane regardless of what sort of architecture you have. It's a symptom of not understanding the tools.

5) Capitalized Anything is always bunk. But if I read agile as meaning "short-term goals inside long-term goals, and continuous re-evaluation", then it makes perfect sense and has helped me both as a consultant and in industry.


I think these problems are not about software development, but are infrastructural and architectural. Lack of good people to handle those things is certainly a problem. But you do need quite a bit of infrastructure for microservices, for resilience, for continuous integration, and all of that paired with some good architectural decisions. Resilience is probably the hardest of these, as it requires expertise in distributed systems, operations, and infrastructure, so that you don't end up building something that has almost no impact but requires a lot of engineering effort.


I don't do anything approaching microservices but a good CI setup combined with a good test suite is an absolute blessing that verges on a 'must have'.


> 1) Choose languages that developers are familiar with, not the best tool for the job

This is probably true, but I think it's also the root cause. Enough developers aren't familiar with the right tools and abstractions (modularity, abstraction, purity, reproducibility, etc.) that we just keep rehashing the same bad ideas in a never-ending stream of new languages and frameworks pushing the same decades-old ideas.


Ultimately, though complexity is a real thing, the word is mostly used to mean "what I personally don't like".


Just to add to this a bit, what do you all think of the idea that "code is a code smell"?

In other words, if you're writing code, make sure you actually need to write it, and can't otherwise find someone else who's written/released/maintains it.


Yes, we are; no, it's not just you. Next question.


Horses for courses.



