
I've been thinking about this topic a lot recently. My thoughts are still crystallizing, but this is a rough snapshot:

What if there are actually two independent markets in play but they're masquerading as one?

Everyone is captivated by Big Tech pay (Big Tech being the slightly larger, more inclusive set that includes FAANG). There's no doubt any engineer would love to be making $250k+/year. However, this market is selecting for something very different from the larger, common market.

We need to ask why Big Tech pay is as high as it is. I don't believe it's driven by supply and demand in the classic sense. Instead, Big Tech sees its candidate pool as "free agents" [0]. This is an important distinction if you view these free agents as a source of potential competition, either individually or signed to another "team".

What if Big Tech pay is a form of greenmail? [1] Much like sports teams -- where pay is also extremely high -- the owners are aware that they're possibly overpaying for these people to sit on the bench. However, their risk analysis tells them that the cost is worth the small loss. [2]

This, to my mind, explains why Big Tech is focused on a very specific sliver of the engineering talent pool: graduates from top 10 schools. These people tick all the boxes: young, smart, fast, energetic, and unattached (typically). With this model in my mind, Big Tech interviewing practices make perfect sense. They aren't looking for "CRUD-a-day" programmers. They're looking for those few engineers who, even under immense pressure, still rise to the occasion and perform. This signals that they could be serious competition if left alone as a free agent. Note: I'm not saying that any one of these candidates will become competition. I'm sure the probability distribution is low but it isn't zero. Unlike other fields -- where barrier to entry is extremely high -- one of these hot shots might -- once every 10 to 15 years -- pull a miracle out of the ether and disrupt Big Tech to the point of destroying them. [3]

The other market is everyone else who uses and requires software to run their business, but for whom software isn't their business. Unfortunately, this market -- if they could get it -- would just as soon buy something off the shelf instead of hiring software engineers. It's only because such commercial off-the-shelf (COTS) software doesn't yet exist that this market still requires software engineering talent.

However, unlike Big Tech, this market doesn't see engineers as competition (and, generally speaking, they aren't). This severely limits the leverage that engineers have when negotiating with these companies. Being blunt: this market is looking for factory workers who shut up, sit in the fishbowl, and do what they're told. The cheaper the better. [4]

It seems to me it benefits Big Tech to blur the differences between these markets. However, engineers need to wise up to the reality of the bifurcation and plan accordingly.

[0] https://www.collinsdictionary.com/us/dictionary/english/free...

[1] https://en.wikipedia.org/wiki/Greenmail

[2] Small loss is relative, of course. Individuals have a difficult time understanding how this math makes sense because a) the numbers are far larger than they're used to seeing and b) they can't see all of the other numbers at play which balance out the strategy.

[3] https://youtu.be/oD65g2RFSHI?t=582

[4] Yes, not every company in this market is like this. However, it's extremely difficult to know this from the outside when you're trying to find a job. You can try to pry the information from them during the interview process but this isn't always successful. My experience is that "good companies" in this market are extraordinarily rare. Plus, there is always a risk that they'll be acquired and the new overlords hate engineers.


You hit the nail on the head. Big Tech pays more as a way to suppress competition. With high wages, it's easier for someone to lose motivation and not pursue their idea, which would otherwise increase competition at the lower level. But as we know, the lower-level competitors gather up and, not long after, gather enough momentum to attack the big guys.

The other way Big Tech suppresses innovation is by brainwashing, or propaganda. Look at all the companies formed by former FAANG engineers. The majority of them go for VC funding and use complicated tools at the onset, not because there's a need but because that's what they're used to. What would've been a simple frontend is now a monorepo monster; a simple backend, a mesh of microservices.

Lastly, Big Tech suppresses competition by open sourcing tools that are technically excellent but not needed by 90% of the companies out there. Instead of innovating, companies are now chasing the platform, i.e. trying to keep up with the tools released by Big Tech. This is a Microsoft playbook.

And to your other point: for the second sector, compensation can easily go up if engineers stop fucking around and start producing. If you're actually producing and the owner sees you as important, then unless they're a foolish owner, they'll give you a piece of the pie.


<sarcasm heavy="true" comedy-intended="true" do-not-hate-me="true" smiley="true">

What do you mean!? Forethought!? That's BDUF! YAGNI! This is the problem with you damn tech weenies: all you want to do is fix things and think!

There is no business value to fixing this problem. Ship! Ship! Ship! Ship! We need to float the IPO or SPAC or...something so we can all get RICH! Who cares if the damn font system is broken!? I don't remember seeing a story card or epic for that in Jira.

Now, there has to be -- like -- hundreds of blockers in Jira you should be working on right now instead of futzing with the tech bullshit.

</sarcasm>

On a serious note: is it possible there's a reason why everything is just slightly shit?


Very effective tagging ;) I've trained actors for decades, and people understand character descriptions (the 'spec' in a commercial) more and more as essentially persona tags.


dBASE was a product of Ashton-Tate, not Borland. [1] Odd sidebar: Ashton-Tate had a BBS accessible via a toll-free (800) number for many years and it was extremely popular. I shudder at the thought of their monthly phone bill.

I built several systems using dBASE III and Clipper. Ah, the Summer of '87. [2]

I know I'm old and washed up when I wish I could time travel back to this era. I miss it.

[1] https://en.wikipedia.org/wiki/Ashton-Tate

[2] https://en.wikipedia.org/wiki/Clipper_(programming_language)


When a program could be written and be useable within hours.

Now it takes me days to install VMs and butchered/forked SDKs and cloud connectors and incant the various sacred undocumented mantras before I can actually open VSCode and start coding.


The cash price for this drug in the US is insane: https://www.goodrx.com/ozempic?dosage=2-pens-of-1mg&form=car...

I can only imagine the fight people will have with their insurance. One of the many reasons we can't have nice things.


Reading the wikipedia page on semaglutide, I was a little confused why one wouldn't just use the peptide it's an analogue of, glucagon-like peptide-1 (GLP-1). It states the only "differences are two amino acid substitutions at positions 8 and 34, where alanine and lysine are replaced by 2-aminoisobutyric acid and arginine respectively."

The article explains that the substitutions increase the half-life of the substance in the body, so I understand that as an advantage of the drug over the naturally occurring peptide. This would be important with an injection. But the skeptic in me wonders whether, now that the drug is available in an oral form, the real reason for pushing it over GLP-1 is that the drug can be patented and GLP-1 cannot.


So the motivation was money. I am Jack's complete lack of surprise.


Wow, for that amount of money I could just hire someone full time to slap food out of my hand.


Emphasis "in the US".

Here are Swiss prices (German/French only): https://compendium.ch/search?q=Semaglutid

While injections are still quite expensive (~$130 per 3mg), pills are available for as little as $4 per 14mg dose (and judging from the dosage information, those pills are splittable).

I don't know the reasoning behind studying the effects of injections vs oral administration.


From the article:

>The international trial was funded by the pharmaceutical company Novo Nordisk.

From the Ozempic website:

> Cornerstones4Care®, NovoFine®, and Ozempic® are registered trademarks of Novo Nordisk A/S.

Follow the money.


That's 2mg; the study was done with 2.4mg, so it's at least that price weekly...


What does this actually say? The site is blocked to those of us outside of the US, probably because the website owners can't be bothered to comply with GDPR?


Short version: lowest price for a carton (2 pens) is >$800. I would guess that each pen is a single dose.


Not even... 2.4mg a week in the study, each pen is 1mg


My mother uses this drug by prescription. The 1mg pen is for four weekly 0.25mg doses (1 month). The 2mg pen is again for four weekly 0.5mg doses.

Really good results and only one application per week, which is great.

We are from Slovenia. The drug is free of charge with normal insurance.


Why would they? They should just go ahead and skip the cookie banners.

The EU cannot do anything against that, so why bother.

Blocking the site is something I do not understand.


$950, and $900 with a coupon.

It looks like that's for two doses.


From 1997 to 1999 I had a contract with Caterpillar. Dealers used an AS/400 application called Service Advisor via good old-fashioned 5250 terminals [0]. For those not born when reality was rendered in shades of green, a real 5250 terminal weighed close to 85 pounds (including keyboard) and was built to withstand armed invasion (only partially joking). They were rugged. All IBM hardware of that era was durable. Caterpillar dealers fix large earthmoving equipment and the users of this application were mechanics and other shop staff. Grease, oil, diesel fuel, and any other kind of grime you can imagine were slathered over these terminals. They still worked. No mouse. No GUI. As is the case with most "green screen" applications, users had a mental map of every panel and could "type ahead" several screens and would do so routinely.

It was decided that this application would be "modernized". Y2K hysteria was ramping up and the nascent dotcom boom was brewing. I lost count of how many times I heard, "We need to put some lipstick on this pig." The new hotness was Java and applets. Yes, you read that correctly: the new front-end was going to be written in Java and deployed as applets. The AS/400 would remain as the back-end. For a period of time DCE [1] plus C and C++ shims (running on the AS/400) glued the Java front-end to the back-end.

The project did get "completed" but in name only. Dealers hated the new interface: it was slow, it required a mouse, it did not support any kind of type-ahead, and PC equipment wasn't designed for such a brutal environment. My last interface with these systems was in 2001 for a very short follow-up project. At that time the green screen was still ruling the roost. Perhaps they eventually did kill off the AS/400 and all of the COBOL. Nah, probably not.

The AS/400 wasn't then (and isn't now) sexy. Green screen applications aren't sexy. COBOL, RPG, and other IBM-centric technologies like CICS [2] aren't sexy. That doesn't mean they don't work. Ironically, they tend to work too well.

This is but one example from oh so many over the years. Sometimes, very rarely, an old technology truly requires complete replacement. Often, calling something "legacy" is used as cover for cargo culting and "keeping up with the neighbors".

[0] https://en.wikipedia.org/wiki/IBM_5250

[1] https://en.wikipedia.org/wiki/DCE/RPC

[2] https://en.wikipedia.org/wiki/CICS

EDITED: formatting


The tech industry, but especially and most drastically open source, has been completely taken over by expert marketers called "dev relations" in the past decade. What you say about working too well is true; the business model of things like Kubernetes or the new datastore du jour requires a high touch to sustain.


>> a real 5250 terminal weighed close to 85 pounds

I came into networking around the time when those were being replaced by 5250 terminal emulation in Windows through an application. So much desk space was freed up, now that you only had to have one 'computer' on each desk.

Some customers would run new CAT5, and others would just slap ethernet adapters on their twinax. I still run into a huge bundle of that stuff in a ceiling from time to time.


That brings back memories (not good ones). Those twinax baluns (and the equally evil token ring type 1 to ethernet baluns) were awful to keep running. I guess in the short run it was less expensive than running new cable, but in the long run my feeling is the TCO probably wasn't worth waiting to bite the bullet.


If you didn't have to work with RPG ("Report Program Generator"), count yourself fortunate.


This is super cool and I would pay Maya-level prices for it. For indie game development, tools like this are gold.

Although MonsterMash is more advanced, this reminded me of Martin Hash's Animation Master [1].

[1] https://www.hash.com/software-14-en


This is a good essay. However, the author didn't mention Pratt Parsing [1], which cleanly addresses [2] the shortcomings of recursive descent parsing. Parser generators look promising until you actually start building anything of moderate complexity. Then it's a constant cycle of kludges and hair pulling.

[1] https://en.wikipedia.org/wiki/Pratt_parser

[2] Mostly. The Pratt model does introduce potential context around operators (depending on the grammar). It's easy enough to extend the basic Pratt model to support this, but it isn't spelled out in many examples or literature.
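To make the Pratt idea a bit more concrete, here's a minimal sketch in Ruby (my own toy illustration, not from the linked article; the class name, operators, and binding-power numbers are all arbitrary choices): precedence lives in a binding-power table, a lower right binding power than left makes an operator right-associative, and adding an operator is a table entry rather than a new grammar rule.

  # Minimal Pratt-style expression parser: integers, + - * / ^, unary minus,
  # and parentheses. BINDING maps an operator to [left_bp, right_bp]; a
  # right_bp lower than left_bp (as for ^) makes it right-associative.
  class PrattSketch
    BINDING = {
      "+" => [10, 11], "-" => [10, 11],
      "*" => [20, 21], "/" => [20, 21],
      "^" => [31, 30],
    }.freeze

    def initialize(source)
      @tokens = source.scan(%r{\d+|[-+*/^()]})
      @pos = 0
    end

    def parse
      result = parse_expr(0)
      raise "trailing input: #{peek.inspect}" if peek
      result
    end

    private

    def peek
      @tokens[@pos]
    end

    def advance
      @tokens[@pos].tap { @pos += 1 }
    end

    # Parse an expression whose operators all bind tighter than min_bp.
    def parse_expr(min_bp)
      left = parse_prefix
      while (bp = BINDING[peek]) && bp[0] > min_bp
        op = advance
        left = [op, left, parse_expr(bp[1])] # nested-array AST
      end
      left
    end

    # Numbers, unary minus, and parenthesized sub-expressions.
    def parse_prefix
      tok = advance
      case tok
      when /\A\d+\z/ then Integer(tok)
      when "-" then ["neg", parse_expr(25)]
      when "(" then parse_expr(0).tap { raise "expected )" unless advance == ")" }
      else raise "unexpected token: #{tok.inspect}"
      end
    end
  end

  p PrattSketch.new("1 + 2 * 3 ^ 2 ^ 2 - -4").parse
  # => ["-", ["+", 1, ["*", 2, ["^", 3, ["^", 2, 2]]]], ["neg", 4]]

The nested-array AST is just for show; the point is how little machinery the precedence handling needs.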


> Parser generators look promising until you actually start building anything of moderate complexity

I've done both, by hand and with parser generators (flex/bison and antlr), and getting the machine to do the boring work is a total fuckload [0] faster and more productive.

Edit: and unless you know what you're doing, you will screw up when hand-writing a parser. I know of a commercial reporting tool that couldn't reliably parse good input (their embedded language).

[0] 3.14159 shedloads in metric


What do you think is special about recursive descent parsing that makes you more likely to screw up unless you know what you're doing?

My experience has been the exact opposite - particularly as the language gets complicated and/or weird, in which case the generated parser becomes horribly brittle. Adding an innocent-looking new rule to your working Antlr or Yacc grammar feels like fiddling with a ticking bomb - it might work straight away or it could explode in your face, leaving you with hours or days of whack-a-moling grammar ambiguities.


I didn't say recursive descent parsing wrt screwing up, I just said "hand-writing a parser". Nothing about the flavour.

I guess our experiences differ but I don't know why. I have written a seriously major SQL parser in antlr and had no problem. And it was huge and complex, well that's TSQL for you.

It may be you have been parsing non-LR(1) grammars in bison which could prove a headache but... well IDK. Maybe I've been lucky.


That's an interesting coincidence - the biggest parser I wrote was also an SQL parser using Antlr. In fact, SQL was only part of it - it was a programming language that supported multiple flavours of embedded SQL (DB2 and Oracle). It worked but I always dreaded when it would have to be changed to support a new feature in a new release of Oracle (or DB/2).

I don't think it's an LR vs LL thing either. I feel that there is no sense of locality with parser generators; it's a bit like quantum mechanics - every rule has a "connection" to every other one. Change one seemingly small part of the Antlr grammar and some "far away" seemingly unrelated parts of the grammar can suddenly blow up as being ambiguous.


Coincidence indeed - I'm modifying my giant SQL grammar right now and building an AST off it. And struggling a bit with that, but that's mainly down to me, not antlr.

It is strange that we're having such different experiences of it. I don't recognise your quantum view of it either, as antlr rules, and bison's, are very context-specific: they can only be triggered in the context of larger rules, and only by a given piece of the target language (SQL here). They get triggered only in those very specific cases. I've never had your struggles with it. I don't understand.


I completely agree; yacc/bison are my go-to tools. The big difference is that you are building trees from the bottom up rather than top down. If you're building an AST it probably doesn't matter; however, a bunch of things (constant folding, type propagation) tend to go in the same direction, so sometimes you can combine stuff.


Exercise for the reader: Write a simple arithmetic expression language parser, with numbers, variables, parentheses and working precedence levels. Extend it with "EXP0 EXP1 ... EXPn" syntax for function calls. Now extend it with "VAR1 ... VARn '.' EXP" syntax for lambdas. Yes, with no "fun", or "\", or "λ" to introduce it—you realise you have a function definition only when you arrive at the dot. The function definition spans as long as it makes sense.

Pretty fun, although it requires some small hackery (I solved it by testing, upon seeing the dot, that the "apply" list I've built to this point contains only variable names, and then re-using it as the "arglist").


I don't know why you'd want to write lambda like that.

Long ago, I made a prototype mixfix syntax for Scheme intended for a shell-like REPL with ISWIM overtones. The paren-less function call syntax (terminated by newline or semicolon, like shell) was done by modifying the Pratt engine to treat juxtaposition as application. Sexp syntax was also valid as parenthesized expressions.


Well, when I played with a minimal lambda-calculus evaluator, I quickly got tired of having to print "\" or whatever to denote the start of a lambda. But look, for example, at this:

    (f. (x. f (v. x x v)) (x. f (v. x x v)))
        fct. n.
            (church0? n)
            (_. church1)
             _. church* n (fct (church- n 1))
Are "\" really needed there? You have to put lots parens to denote where lambdas end anyway, and they also happen to show where they start too, and a dot is a clear enough signal for a human (IMHO) to see that it's a lambda. So that's how I've came up with the idea of writing lambdas like that. Then I extended my lambda-calculus evaluator with some arithmetic primitives, and yes, the syntax got somewhat peculiar.


I haven't made the Show HN post yet, but using the parser combinator library that I've been building[1], here's an answer to your exercise:

  module Joker_vDChallengeParser
    include Parsby::Combinators
    extend self

    def parse(s)
      (expr < eof).parse(s)
    end

    define_combinator(:exp_op) {|left, right| group(left, spaced(lit("^")), right) }

    define_combinator(:pos_op) {|subexpr| group(lit("+") < ws, subexpr) }
    define_combinator(:neg_op) {|subexpr| group(lit("-") < ws, subexpr) }

    define_combinator(:div_op) {|left, right| group(left, spaced(lit("/")), right) }
    define_combinator(:mul_op) {|left, right| group(left, spaced(lit("*")), right) }

    define_combinator(:add_op) {|left, right| group(left, spaced(lit("+")), right) }
    define_combinator(:sub_op) {|left, right| group(left, spaced(lit("-")), right) }

    define_combinator(:call_op) {|left, right| group(left, ws_1, right) }

    define_combinator :identifier do
      first_char = char_in([*('a'..'z'), *('A'..'Z'), '_'].join)
      rest_char = first_char | char_in([*('0'..'9')].join)
      first_char + join(many(rest_char))
    end

    define_combinator :lambda_expr do
      group(
        sep_by_1(ws_1, identifier) < spaced(lit(".")),
        expr,
      )
    end

    def scope(x, &b)
      b.call x
    end

    define_combinator :expr do
      lazy do
        e = choice(
          decimal,
          lambda_expr,
          identifier,
          between(lit("("), lit(")"), expr),
        )

        # Each e = scope ... block is a precedence level. You can switch
        # them around to play with the precedence of the operators.
        #
        # hpe -- higher precedence level expression
        # spe -- same precedence level expression

        # Basing myself on Haskell and making function calls the highest
        # precedence operators.
        e = scope e do |hpe|
          reduce hpe do |left_result|
            choice(
              call_op(pure(left_result), hpe),
            )
          end
        end

        # Our only right-associative precedence level.
        e = scope e do |hpe|
          recursive do |spe|
            choice(
              exp_op(hpe, spe),
              hpe,
            )
          end
        end

        e = scope e do |hpe|
          recursive do |spe|
            choice(
              neg_op(spe),
              pos_op(spe),
              hpe,
            )
          end
        end

        # Left-associativity done by parsing left operand bottom-up and
        # right operands top-down.
        e = scope e do |hpe|
          reduce hpe do |left_result|
            choice(
              mul_op(pure(left_result), hpe),
              div_op(pure(left_result), hpe),
            )
          end
        end

        e = scope e do |hpe|
          reduce hpe do |left_result|
            choice(
              add_op(pure(left_result), hpe),
              sub_op(pure(left_result), hpe),
            )
          end
        end
      end
    end
  end

That was a nice exercise. Here's some example parses:

  pry(main)> Joker_vDChallengeParser.parse "- 3 - foo bar . foo - bar - 5"                                     
  => [["-", 3], "-", [["foo", "bar"], [["foo", "-", "bar"], "-", 5]]]
  pry(main)> Joker_vDChallengeParser.parse "- 3 - (foo bar . foo - bar - 5) - 4 * -2 + 5 / 2"                 
  => [[[["-", 3], "-", [["foo", "bar"], [["foo", "-", "bar"], "-", 5]]], "-", [4, "*", ["-", 2]]], "+", [5, "/", 2]]
  pry(main)> Joker_vDChallengeParser.parse "- 3 - (foo bar . foo 5 6 - bar - 5) - 4 * -2 + 5 / 2"             
  => [[[["-", 3], "-", [["foo", "bar"], [[[["foo", " ", 5], " ", 6], "-", "bar"], "-", 5]]], "-", [4, "*", ["-", 2]]], "+", [5, "/", 2]]

  pry(main)> Joker_vDChallengeParser.parse "foo bar . foo bar baz qux . baz qux"                              
  => [["foo", "bar"], [["foo", "bar", "baz", "qux"], ["baz", " ", "qux"]]]
  pry(main)> Joker_vDChallengeParser.parse "foo bar . foo bar + baz qux . baz qux"                            
  => [["foo", "bar"], [["foo", " ", "bar"], "+", [["baz", "qux"], ["baz", " ", "qux"]]]]

You can try it by cloning my repo, running `./bin/console`, and pasting the module, or putting it in a file in `lib/`. If you put it in a file, you can reload changes with `reload!` in the console.

[1] https://github.com/jolmg/parsby

EDIT: Fixed the syntax for newer versions of Ruby.


Looking back at it, I forgot to allow spacing between parentheses and inner expression

  between(lit("("), lit(")"), spaced(expr)),
and using space as an operator prevents `(foo)(bar)` from being parsed as a call.

This is better, just adjacent expressions with optional in-between whitespace:

  define_combinator(:call_op) {|left, right| group(left < ws, right) }
Too late to edit, though. Oh well.


Also no mention of combinator parsing.

https://en.wikipedia.org/wiki/Parser_combinator


They're mentioned in footnote 1 as a flavor of recursive descent parsing.


I feel the Story of Mel is appropriate for this thread. And to quote the legend himself, "If a program can't rewrite its own code, what good is it?"

http://www.cs.utah.edu/~elb/folklore/mel.html


It's a shame Kent used the words "test" and "development". Test Driven Design would have been better, but people would still misinterpret what is under "test". Yes, there's a side effect of asserting behavior in Kent's vision of TDD but it's a happy accident.

What's under test is the design. Way before TDD was a thing, when I worked at IBM, we used to call this "inverted design": write the calling code first to see what the API might look like and then make it work. In the late 80s it would have been considered a massive waste to assert behavior though; we'd just implement it.
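As a tiny sketch of what that looks like in practice (hypothetical names, Ruby purely for illustration, not anything from IBM): the calling code gets written first, as if the API already existed, and the implementation is back-filled to satisfy it.

  # Step 1: write the caller you wish you had, before any implementation exists.
  def print_invoice
    invoice = Invoice.new(customer: "ACME Corp")
    invoice.add_line(description: "Widgets", quantity: 3, unit_price: 9.99)
    invoice.add_line(description: "Shipping", quantity: 1, unit_price: 5.00)
    puts "Total for #{invoice.customer}: $#{format('%.2f', invoice.total)}"
  end

  # Step 2: the smallest implementation that satisfies the caller above.
  class Invoice
    attr_reader :customer

    def initialize(customer:)
      @customer = customer
      @lines = []
    end

    def add_line(description:, quantity:, unit_price:)
      @lines << { description: description, quantity: quantity, unit_price: unit_price }
    end

    def total
      @lines.sum { |line| line[:quantity] * line[:unit_price] }
    end
  end

  print_invoice # => Total for ACME Corp: $34.97

The caller shapes the API; whether you then keep assertions around it is the separate question above.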

Automated functional tests (from the outside in) are where the bulk of does-it-do-what-it-says-on-the-tin testing should happen.


> Way before TDD was a thing, when I worked at IBM, we used to call this "inverted design": write the calling code first to see what the API might look like and then make it work.

I really like the idea of this, and I very occasionally have the foresight and wherewithal to do this kind of "top-down" programming.

Maybe not surprisingly, this is how I sometimes end up with the much-criticized "Interface with only one implementer" design smell. I write the interfaces that I would like, right next to where I'm writing the calling code. The interface(s) evolve as the calling code is unfolding. Then, later, I make an impl for the Interface.

At that point I could just delete the interface and only use the concrete implementation, but... I don't. shrug.


I don't think so. If I didn't know who Ron Jeffries was, I would have sworn the following series was pure satire about TDD:

    https://ronjeffries.com/xprog/articles/oksudoku/
    https://ronjeffries.com/xprog/articles/sudoku2/
    https://ronjeffries.com/xprog/articles/sudokumusings/
    https://ronjeffries.com/xprog/articles/sudoku4/
    https://ronjeffries.com/xprog/articles/sudoku5/


Preface:

Once upon a time, I worked for a company that rents movies on DVD via kiosks. When I joined the team, pricing was hard-coded everywhere as a one (1), because YAGNI. The code was not well factored, because iterations and velocity. The UI was poorly constructed via WinForms, and the driver code for the custom robotics was housed inside a black box with a Visual Basic 6 COM component fronting it. It was a TDD shop, and the tests had ossified the code base to the extent that even simple changes were slow and painful.

As always happens, the business wanted more. Different price points! (OMG, you mean it won't always be a one (1)!!?) New products! (OMG, you mean it won't always just be movies on DVD??!) And there were field operational challenges. The folks who stocked and maintained the machines sometimes had to wait for the hardware if it was performing certain kinds of maintenance tasks (customers too). Ideally, the machine would be able to switch between tasks at a hardware level "on the fly". Oh, and they wanted everything produced faster.

I managed to transform this mess. Technically, I would say it was (mostly) a success. Culturally and politically it was a nightmare. I suffered severe burnout afterwards. The lesson I learned is that doing things "right" often has an extremely high price to be paid, which is why it almost never happens.

On "over-engineering":

I find this trend fascinating, because I do not believe it to be an inherent issue. Rather, what has happened is that "engineering" has moved ever closer to "the business", to the point of being embedded within it. What I mean by "embedded" here is structurally and culturally. [Aa]gile was the spark that started this madness.

Why does this matter? Engineering culture is distinct and there are lessons learned within we ought not ignore. However, when a group of engineers is subsumed into a business unit, their ability to operate as engineers with an engineering culture becomes vastly more difficult.

The primary lesson I feel we're losing in this madness is the distinction between capability enablement and the application of said abilities.

Think about hardware engineering: I do not necessarily know all of the ways you -- as the software engineer -- will apply the abilities I expose via my hardware. Look at the amazing things people have discovered about the Commodore 64 years after the hardware ceased production. Now, as Bob Ross would say, "Those are Happy Accidents." However, if I'm designing an IC, I need to think in terms of the abilities I expose as fundamental building blocks for the next layer up. Some of those abilities may never be used or rarely used, but it would be short sighted to not include them at all. I'm going to miss things, that's a given. My goal is to cover enough of the operational space of my component so it has a meaningful lifespan; not just one week. (N.B. This in no way implies I believe hardware engineers always produce good components. However, the mindset in play is the important take away.)

Obviously, the velocity of change of an IC is low because physics and economics. This leads everyone to assume that all software should be the opposite, but that's a flawed understanding. What happens today is we take C#, Java, Python, Ruby, etc. and start implementing business functionality at that level. To stretch my above hardware analogy, this is like we're taking a stock CPU/MCU off the shelf and writing the business functionality in assembly -- each and every time. Wait! What happened to all that stuff you learned in your CS undergrad!? Why not apply it?

The first thing to notice is that the "business requirements" are extremely volatile. Therefore, there must be a part of the system designed around the nature of that change delta. That part of the system will be at the highest, most abstract, level. Between, say the Java code, and that highest level, will be the "enablement layers" in service of that high velocity layer.

Next, notice how a hardware vendor doesn't care what you've built on top of their IC component? Your code, your problem. Those high-delta business requirements should be decoupled from software engineers. Give the business the tools they need to solve their own problems. This is going to be different for each business problem, but the pattern is always the same. The outcome of this design is that the Java/C#/whatever code now has a much lower change velocity and the requirements of it are future enablement in service of the tools and abstraction layer you've built for the business. Now they can have one week death march iterations all they want: changing colors, A/B testing, moving UI components around for no reason...whatever.
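As a toy illustration of that split (hypothetical names, rules, and prices; Ruby only because it's short): the volatile business rules live in data the business can edit, while the engine that applies them changes slowly.

  # High-velocity layer: what the business edits week to week (could be a DSL,
  # a config store, or a visual tool; plain hashes are enough for the sketch).
  PRICING_RULES = {
    dvd:     { base: 1.50 },
    blu_ray: { base: 2.00 },
    game:    { base: 3.00 },
  }

  PROMOTIONS = {
    "FIRSTNIGHT" => { discount: 0.50, applies_to: [:dvd, :blu_ray] },
  }

  # Low-velocity enablement layer: a generic engine that only knows how to
  # look up and apply rules, not what this week's prices or promos are.
  def quote(product, promo_code: nil)
    rule = PRICING_RULES.fetch(product) { raise ArgumentError, "unknown product #{product}" }
    promo = PROMOTIONS[promo_code]
    price = rule[:base]
    price -= promo[:discount] if promo && promo[:applies_to].include?(product)
    price.round(2)
  end

  puts quote(:dvd)                               # => 1.5
  puts quote(:blu_ray, promo_code: "FIRSTNIGHT") # => 1.5

The point isn't the pricing logic; it's that a price change never touches the engine code.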

There are real-life examples of this pattern: Unity, Unreal Engine Blueprints, SAP, Salesforce. The point here isn't about the specifics of any one of these. Yes, a system like Blueprints has limits, but it's still impressive. We can argue that Unity is a crappy tool (poor implementation) but that doesn't invalidate the pattern. SAP suffers from age but the pattern is solid. The realization here is that the tool(s) for your business can be tailored and optimized for their specific use case.

Final thoughts

Never underestimate that the C3 project (where Extreme Programming was born) was written in Smalltalk, with a Gemstone database (persistent Smalltalk). One of the amazing traits of Smalltalk is that the entire environment itself is written in Smalltalk. Producing a system like I describe above, in Smalltalk, is so trivial one would not notice it. Unfortunately, most business applications are not written in environments nearly as flexible so the pattern is obscured. I've held the opinion for a long time that XP "worked" because of the skills of the individual team members and the unique development environment in use.

As I stated at the beginning, this path is fraught with heartache and dragons for human reasons.

