Responsible Monkeypatching in Ruby (appsignal.com)
55 points by thunderbong on Aug 27, 2021 | hide | past | favorite | 55 comments



I wonder why this otherwise thorough post doesn't mention the almost-decade-old feature called "refinements" [1]?

1: https://ruby-doc.org/core-2.5.3/doc/syntax/refinements_rdoc....
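For anyone unfamiliar: a refinement scopes the patch to code that opts in with `using`, instead of changing the class globally. A minimal sketch (the module and method names are illustrative):

```ruby
# A refinement only takes effect in scopes that opt in with `using`.
module Shouty
  refine String do
    def shout
      upcase + "!"
    end
  end
end

# Before `using`, the method simply doesn't exist:
#   "hi".shout  # => NoMethodError

using Shouty
"hi".shout  # => "HI!"
```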


I don't know the author's reasons, but I've personally always avoided refinements after first reading about their performance hit for JRuby code[0].

I read somewhere recently that even for MRI ruby you can take a 40% hit when using refinements (but please take that with a big pinch of salt as I can't find the source right now).

0. http://blade.nagaokaut.ac.jp/cgi-bin/scat.rb/ruby/ruby-core/...


The source is me: https://gist.github.com/casperisfine/1c46f05cccfa945cd156f44...

Note that this overhead applies as soon as a refinement is defined for a method, regardless of whether it's ever active.

That 40% figure is for an empty method. So for "big" methods that are infrequently called it's probably fine, but it should really be avoided in hotspots.


Aha! Excellent, thank you.

For me refinements are a bit like meta-programming; I'm glad they exist but I think there's other ways to get the same result which are easier to live with.


Not sure why the author chose not to, but personally I’ve avoided use of refinements because of the weird, stateful way that it changes the implementation of a class. It’s too confusing to scan the code quickly and know whether a particular class has a refinement activated in a particular scope. Ideally don’t monkey patch things, wrap them in your own class that decorates a stdlib class. If you can’t do that, my personal preference is to have the monkey patched class be consistently monkey patched on all scopes.
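The decorate-a-stdlib-class approach the parent describes can be as small as a SimpleDelegator subclass; a sketch with made-up names:

```ruby
require "delegate"

# The extra behaviour lives on a wrapper, not on String itself,
# so it's only available where the wrapper is explicitly used.
class FancyString < SimpleDelegator
  def shout
    upcase + "!"
  end
end

s = FancyString.new("hello")
s.shout   # => "HELLO!"
s.length  # => 5, delegated to the wrapped String
```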


I use refinements in a couple of my projects, they definitely feel like the right approach, especially in a library where you don't want unintended consequences.

Admittedly they're a bit of a mystery to lots of devs.


Absolutely, refinements are the most legit way of monkeypatching. I could almost bet the author doesn't know about them; had he known, there is no way he would forego mentioning a feature that was specifically introduced to address the title of the post.


There is no such thing as "responsible monkeypatching". Even if it's fixing a bug in the source code or annotating, as the author suggests are practical uses.

Writing good code is about clarity and working within maintained expectations. People reading your code should be able to know what it does. If someone joins your team and knows that Method X in library Y is bugged, they will expect it to be bugged in your codebase as well, and reasonably so. The fact that you monkeypatched it is not something that they will be aware of, and can potentially cause bugs in their code as a consequence, costing them time and patience.

A better approach is to use a wrapper instead of monkeypatching, or inherit from the bugged class and overwrite the bugged method. You get the same results, but anyone reading your code, with or without prior knowledge of how your code works, will:

1. Have immediate understanding that you've altered the source code in some way

2. Have a crumb trail leading them back to your custom implementation
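A sketch of the inherit-and-overwrite option (BuggyParser is a hypothetical stand-in for the broken library class):

```ruby
# Hypothetical library class with a bug: it doesn't strip whitespace.
class BuggyParser
  def parse(line)
    line.split(",")
  end
end

# Subclassing instead of monkeypatching gives readers both signals above:
# the alteration is visible at the call site, and the fix has an obvious home.
class FixedParser < BuggyParser
  def parse(line)
    super.map(&:strip)
  end
end

FixedParser.new.parse("a, b ,c")  # => ["a", "b", "c"]
```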


Exactly. When the article got to its example of a bug found in production, I knew what would happen before the reveal, because it's what always happens.

Damn it, you've got a language with namespaces; why do you think that is?


Can someone help me understand why anyone would ever rely on monkeypatching in non-test code?

Python is just as dynamic as Ruby in a lot of ways. But in Python, overwriting an existing and important/fundamental/ubiquitous method would be an absolute last resort when all other sensible approaches have failed. And in that case you would at least have the sense to save the original method somewhere under a different name.

What makes such a horrifying idea so apparently not-horrifying in Ruby? The ability to "reopen" classes?

None of the examples in the blog post seemed like such dire circumstances as to require it. Maybe replacing a buggy method while you wait for a PR and new release is a valid use case. But the failure mode is also particularly nasty.
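The Ruby equivalent of "save the original method somewhere under a different name" is usually alias_method. A sketch (the NBSP-stripping behaviour is just an example tweak):

```ruby
class String
  # Keep the original reachable before overwriting it.
  alias_method :original_strip, :strip

  def strip
    original_strip.tr("\u00A0", "")  # example tweak: also drop NBSPs
  end
end

" hi\u00A0 ".strip     # => "hi"
" hi ".original_strip  # the untouched original is still callable
```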


At one startup in 2007, I built a Yodlee-style automation framework on top of Ruby and the web automation gem Watir. It ran across most of the big banks in the US, whose servers failed frequently with one-off errors that would be resolved with a refresh. Since I built my framework atop the Watir gem, the easiest way to globally fix this issue (and others) was to add error catching and retry-on-server-error methods using this monkey patching technique. I wrote this as its own class and loaded it as a sidecar to the Watir gem. This way our custom implementation would enhance Watir across all our deployments while allowing us to continue upgrading the Watir gem in the future. With this specific retry-on-server-error enhancement, we had robust, globally applied error handling specific to our use case, without having to add boilerplate error handling to each bot written to crawl each of the US banks, which would otherwise have been required for every page load and click that needed a server response to proceed. Each bank bot took a few hours to write and was reliable.


I'm entirely unfamiliar with Ruby, so although this may seem like a swipe at your language choice, it isn't. I am confused about the banks not wanting to use CICS and extend to the web using something like Liberty server profiles that launch Java threads that understand CICS, and therefore any custom error codes specific to the transaction system. I'm guessing that you needed to catch errors that weren't native to your code and were unique to the respective bank clients. I'm further guessing that because CICS provides a comprehensive custom error handling protocol and services, and CICS is everywhere in financial transactions and nobody wants to be responsible for the fate of any lost TS error messages, this is why you could patch to handle the errors in this way without any problems with your code?

Since IBM went all-in on services and consulting and practically only the mainframe remained of the old IBM, and while x64 et al. went into an architectural Cambrian explosion at the turn of the century, the IBM Z division has become an entirely different animal. The Telum processor, just discussed at Hot Chips 33, with CISC instructions for memory architecture, networking, and in-pipeline RNNs, is so out there that I have started trying to figure out how much it takes to become an IBM partner just to get enough access to write about this technological freak. If the team exists, the finance is a slam-dunk link with an infrastructure fund. This mainframe AI can be trained with regular tools and functions with really low latency: it's on the die. Just don't be ageist; train vets with clearances (for LOB, or just because it's the right thing to do). My partner founded a headhunting firm for veterans and placed the vast majority in IBs after 6 months of his programme, with age no issue to CEO-signed undertakings. My partner really hit it out of the park with his programme quality (quant covered from fp math to martingales), but it had to shut down because the people guy who kept everything from rabbit-holing to fulfil my partner's fantasy hedge fund ambitions died of cancer, and a tech partner is necessary to free me to pick up the relationships, which are as open as the day my partner kicked those doors off their hinges with some outrageous leverage of 80s mainframe knowledge that pumped the flow desk of the financial infrastructure organisation whose CEO didn't blink when my partner made the FT, trampling over every unwritten rule of headhunting, rules he never knew existed, like everything else beyond his screens. This guy has just shy of 50 years of success behaving like this; what am I to know? I am still recovering from the unending cardiac arrests I had for no reason until I realised that some relationships really do transcend everything.
The UK incidentally has an enormous pool of senior management talent created by the government firing all civil servants who qualified vocationally instead of via university. Everybody was summarily dismissed, just before the 1997 employment act was introduced. I have worked with the director of the largest aid agency in budgetary terms after the Gates Foundation, whose office was a dozen or so people. It now needs 600, apparently. The logic was that partner agencies didn't manage fraud well enough for the British government's liking. Plenty manage better than we do, without any relationships with the same leaky institutions that now take pride in embarrassing the Brits, with implicit sentimental support from the agencies we cut off.

My point is not self-promotion but the promotion of an architecture hardly anyone gets near enough to speak about for comparison with the front line of web development. This is an unmitigated loss for everyone all around.

All that boastful background is intended to directly criticise the nature of the technology discussed here, which is controversial enough to elicit numerous warnings in the comments immediately after publication.

Given the search rank of HN and the difficulty of finding human-readable (non-expert) commentary on highly technical issues like monkeypatching, I don't see how the existence of this discussion doesn't de facto eliminate the technique, and even potentially the technology, as a serious option for critical-systems deployment.

Consider the weight a browsing executive will place on what's said here.

In the example just given, the application is applying node-specific handling changes to in-progress production code in a webserver talking to banks running transaction systems.

Consistency of the www service (and presumably the user interface) is critical to customers, and it is likely to be the only way to access account information, despite how much people will pay for the capability. (Nobody gives up their company X.500 terminal, probably not even after corporate death; companies have been bought out of administration only for grandfathered services often enough that I considered trading them for a living.)

Monkeypatching tells me that the errors aren't in your code, because why else introduce the trade-offs and risks of a technique for blowing up all of your most impressive customer references at the same time? I'm guessing hard that nothing much has to be done to the error messages.

But it's so hard to blow up CICS (assuming you read the docs), potentially even lacking in-depth mainframe knowledge. The opportunity to launch lightweight Java environments and FFI to your Ruby runtime, or make the call via CICS from an SPI error manager in JavaScript (because CICS, you know, is giving you actual state), or create a message feed for individual subscription as an extra confirmation that you didn't lose their messages; running this under the bank's sysadmins' purview, while getting the same certainty your updates applied, with point-in-time rollback to exact system clock and transaction stamp and in-flight transaction state, and zero sweat about any potential consequences: that looks pretty attractive from where I am seeing this.

If I have totally misunderstood this, I'd like to beg a point for the scenario that I have described: namely one where you are responsible for the ephemeral bleeding edge of web UX and (in my hackneyed eyes) making a stateless, best-effort, ad hoc text delivery protocol stateful, reliable, and infinitely scalable, plus a GUI thread manager and connections manager and anything else I omitted to boot; behind which is a great hulking mainframe nobody wants anyone to touch, because the lack of adoption can't overcome the human fear of being responsible for borking an IBM Z. No matter how indoctrinated everyone is that Z can't fault, assumptions of complacency obliterate reason, and so you're hoodwinked (I joined that club long ago) into not looking up how you can run webserver tech on CICS, keeping it on the mainframe and getting the benefit of all that reliability and fault tolerance and data-processing invincibility for your own personal gain.

Putting the code on CICS is a slam-dunk argument, because how can anyone beat the mainframe management magnificence mantra unless there are some major issues with the big-iron priesthood? Your code automatically gets the same management and reliability capabilities, and gives sysops single-pane-of-glass sight of everything happening with the front-end interactions, an opportunity to get in on the new wave (potentially for them personally), and a nice brag of relevance in the face of the bleeding-edge and unstructured world we represent. Meanwhile the web development team just bagged major C-suite kudos and the ability to co-opt some of that incredibly valuable institutional invincibility incentive that can transform your business model from job quotation to Senior Web Scale z/OS Customer Systems Integrity Consultant times head count, plus paid pitch fees, retainer, and expenses. Plus, probably, the possibility of getting insurance cover for DR scenarios and hourly overtime on any incidents, paid for by customer DR budgets and policies, instead of being on the wrong side of pointing fingers; and the one thing that C-suite types sympathise with is web UX and general web reliability grief.

I surely deserve criticism about my style with this, but what's going on with the mainframe right now is really wild. My pitches above were hypothetical to illustrate the point, although my partner and good buddy is as much work as that non-fiction description, and a leading dinosaur in my eyes. I absolutely accept that the chances of mainframe ubiquity are not meaningful numbers. But with everyone busy making the OS as irrelevant as possible, how long does it take for the mainframe Telum CPU to become viable mainstream silicon? Very strangely, I think I am going to live to see that move being made. Maybe the IBM strategy is the right one and only the attitude of management is the problem.


I'm in the middle of writing some of this right now.

For 'reasons', I need to add callback hooks into a particular HTTP client library my org uses across several dozen projects. The library we commonly use doesn't have hooks.

I've got a couple of options: switch wholesale to a different library, build a child class, build a nasty bit of monkey patching boilerplate, or build a little library which prepends some logic into the ancestors chain.

The first and second would cause eye-rolls ("I'm just calling an internal api in the way I've always done, why are you making me change it everywhere!"). The monkey patching boilerplate, well, gross. But a little bit of prepend logic packaged into a gem and I've got the behaviour I need.
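A sketch of that prepend approach (HttpClient stands in for the real library class, and the hook logic is illustrative):

```ruby
# Stand-in for the third-party HTTP client.
class HttpClient
  def get(path)
    "response for #{path}"
  end
end

# Prepending puts the hook module ahead of HttpClient in the ancestor
# chain, so its #get runs first and `super` falls through to the library.
module RequestHooks
  def requests
    @requests ||= []
  end

  def get(path)
    requests << path  # before-hook: record every call
    super
  end
end

HttpClient.prepend(RequestHooks)

client = HttpClient.new
client.get("/users")
client.requests             # => ["/users"]
HttpClient.ancestors.first  # => RequestHooks
```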

One of the tremendous joys I get from ruby is never having the feeling that the language or runtime can't do something I need. Whether that thing should be done or which way it should be done is another matter.


As you can see from below, it's cultural. It's also the reason why every Rails codebase turns really bad as it grows: metaprogramming keeps being used extensively instead of being limited. Modules abused, stuff monkey patched, methods creating other methods continuously.

The consequence is always the same: code that's unmaintainable over the long term.

Unfortunately to reach this disillusioned state you need to hit some really hard walls, which doesn't happen to all ruby devs.


That's a matter of developer discipline and foresight. I've been managing a Rails project since late 2011 (that's my longest run so far), usually adding a few features per year and fixing bugs, mostly alone or with another developer. If I put too much magic into the code I'd spend much of my time figuring out what I did months ago. Instead I can usually read my code from 2011 / 2012 and understand what it does. The puzzling exceptions are not so exceptional: I'm having the same experience with the other code bases I'm working on (Python, Elixir.)

So when in doubt, no metaprogramming. Example: I never wrote a macro in a real-world Elixir project.


I can't say anything about the number of features, that's entirely business driven, but of course you shouldn't be writing macros in Elixir in a standard application.

But it's not abnormal for a rails codebase to have a Concern (rails module) that provides a class method that does something, specific to the application. Which is equivalent to a macro.


I have seen a couple of Rails codebases and "metaprogramming keeps being used extensively instead of being limited" was not the problem in any of them. Instead it was mostly picking bad dependencies, rushing to delivery, layering complex features on top of one another. The code was maintainable, it is the business logic (and the decisions made hastily) what made it unmaintainable.

> Unfortunately to reach this disillusioned state you need to hit some really hard walls

I have seen quite a few Rails apps, some pretty old, and these were not the hard walls which were hit. One way of not hitting those specific walls (which are pretty soft IMO) is hiring decent people, decently compensated. And empowering them to make decisions.


Notice that we might have a different definition of metaprogramming. Including a module, to me, is part of metaprogramming.

A lot of codebases do that, and they create chains of undeclared dependencies between the included modules that make it impossible to determine the requirements for using such a module in the first place.

Development also tends to slow to a crawl, and teams get overstaffed to compensate.


The best example I can think of is Rails' ActiveSupport. The changes it brought to Ruby aren't fundamental in any way but they are convenient. For example asking whether a variable is present? (truthy, so empty str and empty [] should return false) is not possible with plain Ruby but is possible in Rails. It's stuff web developers need and makes things simpler. This is done by monkeypatching. Another example is if you're working on a library that has some bug but you can't quite yet jump to the next version.


My favorite ActiveSupport monkey patching is on the number classes. Things like 2.weeks.from_now are just too useful to scorn monkey patching over ideological programming reasons.
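A stripped-down sketch of what that patch amounts to (the real ActiveSupport version returns an ActiveSupport::Duration, not raw seconds):

```ruby
class Integer
  def weeks
    self * 7 * 24 * 60 * 60  # duration in seconds, simplified
  end

  def from_now
    Time.now + self
  end
end

2.weeks           # => 1209600
2.weeks.from_now  # a Time roughly two weeks ahead
```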


I wish every language had open classes for this reason. It makes ruby such a joy to write in. Something like Kotlin's class extensions might be a happier middle ground for many languages though, since that way you only get what you pulled in, in any given file.

Yes, of course you can do horrific things with it. But the ruby community has largely been absolutely stellar about not doing that - most keep insanities tightly bounded, where they can be used to make pleasant and error-resistant APIs. And even when they're not (which is usually self-inflicted by internal code), the language is flexible enough to let you unfuck just about any fuckery that you run across, if needed.


I'd argue the opposite. Most Ruby codebases are infected with monkey patches; an extracted library now depends on ActiveSupport being used, forcing the dependency.


Activesupport is hugely widespread, yeah. It's a bit unfortunate since it provides so much, and it's rare that all (or even most) of it is used.

But beyond that though, the vast majority of issues I've seen with monkey patching has been stuff that was created in the project/company that's experiencing the problems. Libraries are generally very good about how they monkeypatch (because doing it wrong with hundreds of using-projects VERY quickly runs into problems), but those ad-hoc internal monkeypatches are routinely done by people who don't fully understand what they're doing, or take shortcuts. Those can have latent bugs linger for a very long time, and yeah - they can be nasty to unravel.


> Activesupport is hugely widespread, yeah. It's a bit unfortunate since it provides so much

Even this is becoming less and less necessary as more of the commonly-used syntactic sugar gets pulled into Ruby core.


Open classes are against SOLID principles (the O stands for open/closed: classes should be extendable, but not modifiable). So, a big no in some circles. Aside from open classes, Ruby also has instance_eval and class_eval, taking the violation of that principle to a whole new level.


Funny thing, that construction is actually pretty easy to achieve in buttoned-up and statically typed C# with extension methods:

        using System;

        // Extension methods must live in a top-level static class,
        // and need to be visible to the caller.
        static class TimeExtensions
        {
            public static TimeSpan Weeks(this int n)
            {
                return new TimeSpan(n * 7, 0, 0, 0);
            }

            public static DateTime FromNow(this TimeSpan dt)
            {
                return DateTime.Now.Add(dt);
            }
        }

        class Program
        {
            static void Main(string[] args)
            {
                Console.WriteLine(2.Weeks().FromNow());
            }
        }


Not bad! I also feel like it’s worth mentioning that in rust you can extend primitive types with traits. I’ve done that to add some bit-munging helpers while working on a toy Game Boy emulator to make it easier to deal with 16- and 8-bit register values like (8, 8).join() and (l, h) = 16.split().


I'm impressed actually.


Oh god no. If you want to parse strings, parse strings. I avoid Ruby like the plague (after about 2 years where it was my primary language) because of this kind of clever magic spaghetti that subtly breaks in all sorts of cases.


That's Rails not Ruby actually. And it never bothered me once but to each his own...


Weeks(2).from_now

I don't understand how monkey patching is beneficial beyond personal preference.


The entire Ruby ethos and surrounding APIs are basically all a result of a particular set of personal preferences. One of the core values of Ruby is programmer happiness. That’s obviously qualitative and varies from person to person. That said, this kind of pattern is extremely _Ruby_ and if you like it, you like it. I love writing Ruby code because of this kind of attention to making code almost like prose. However, I’ve also worked on a lot of terrible Ruby projects that were a mess of metaprogramming and clever dynamic language abuse, so it’s hard to mount a rigorous defense.


My first experience with Ruby was writing rspec tests and chef recipes, and I can't say I was very happy when I had to debug that code. It's given me a deep aversion towards any Ruby DSL that still persists. I will work with Ruby if I have to, but I am not happy about it.

The way Ruby does metaprogramming makes debugging extremely frustrating because your stacktraces will contain references to methods that exist nowhere. Good luck then trying to find where they're defined so you can understand how they work.

I have a very strong opinion that generating methods at runtime is a terrible feature that should be used only when nothing else can be done.


There are languages which can automatically transform between a.foo(b) and foo(a, b) - for example the D language. Does this make 2.weeks.from_now more palatable? And if it does, then why worry about whether the code “goes on the class” or not? During execution it doesn’t really matter where the code lives, and during development open classes are doing basically the same thing as defining functions overloaded on the first argument.


That looks like you're having to make a Weeks class just so you can do the same thing, instead of adding it to Integer and having it available on any int.


In Ruby at least this example would just be an instance method named Weeks on the Kernel module, and it wouldn't really be any more clunky to implement than the ActiveSupport flavor. However, because Ruby, you're still reopening something. You're just adding a method to Kernel instead of Integer.


That would just be a global method in ruby.

At the top of any file:

def Weeks(arg) = arg * 7 * 24 * 60 * 60  # seconds, say


It could have been done in various less invasive ways, e.g. Clock(2).weeks.from_now (I'm making this up on the fly; there are definitely better names). This wouldn't make your number 2 respond to "weeks" all of a sudden, and the cost would be 7 additional characters.

That's negligible.


What's wrong with 2.weeks.from_now ? I get that it hurts your sensibilities (SOLID etc), truth is it's fine. I've seen a good share of Rails projects and never had a problem with this yet. We tend to have a knee-jerk reaction to certain things, and sometimes for good reason, but ActiveSupport is used in millions of Rails projects and nothing horrible happens.


There are things that go wrong when using ActiveSupport, but it's so ingrained in people's workflow that they won't notice.

The one I consider the most annoying: it substantially increases load time. ActiveSupport is huge and just requiring it increases the load time by 1.x seconds. Given what it provides, that's substantial.

The inability to extract a library (or a service) without depending on ActiveSupport is a cost that doesn't exist with the other approaches.

Not to underestimate, your _number_ now responds to `weeks`, and duck typing is the only way you have "interfaces" in Ruby. I can easily picture having a "weeks" or days attribute on some entity and a service object expecting that, now there are subtle bugs when passing accidentally the entity's _value_ instead of the entity itself.

Yeah, they are "minor" things, but the point is death by a thousand cuts.

The last damage is more subtle: devs, even less experienced devs, will feel authorized to monkey patch. In the private codebases I saw (multiple) this is common. It's not a last resort anymore and it ends up becoming a hard problem to debug and resolve.

The ability to monkeypatch pushes devs NOT to design their objects in an extensible fashion. This is effectively the most damaging consequence, but it's incredibly hard to assess.


> The inability to extract a library (or a service) without depending on ActiveSupport is a cost that doesn't exist with the other approaches.

I'm not sure what you mean; Rails depends on ActiveSupport so if you're a Rails codebase you don't really care, you already depend on it. If you're a gem author it's your own decision if you want to depend on ActiveSupport or not, I'm sure many gems don't depend on it, and others do.

> Not to underestimate, your _number_ now responds to `weeks`, and duck typing is the only way you have "interfaces" in Ruby. I can easily picture having a "weeks" or days attribute on some entity and a service object expecting that, now there are subtle bugs when passing accidentally the entity's _value_ instead of the entity itself.

Never seen this. Not saying it can't happen (can u give an example though? not 100% sure what you mean) but I've never seen it.

> Yeah, they are "minor" things, but the point is death by a thousand cuts.

This is a bit of a hyperbolic statement, wouldn't you agree?

> The last damage is more subtle: devs, even less experienced devs, will feel authorized to monkey patch.

I think it happens way less than you think; usually in teams of inexperienced people who discover Ruby and metaprogramming for the first time. But the worst monkeypatching bug I've ever seen was actually done in javascript; a guy monkeypatched some jquery method and broke 20000 websites at once (we were a widget startup). Monkeypatching isn't a Ruby thing it's a dynamic languages thing. I don't see Ruby/Rails culture just encouraging people to monkey patch String or Object. The fact dhh and a very experienced team of devs did it, and tested it and perfected it for years, doesn't mean you should do it. This is pretty common knowledge now and I think 99% of Rubyists would agree.


I wish that were true. Still, when I look at authentication done with Devise, there is extensive monkeypatching (probably caused by poor design in Devise itself, but still).

> This is a bit of a hyperbole statement wouldn't you agree?

I'm not sure. I see many small problems that slowly lead to what the Ruby community is now. That being said, if you look at how many things ActiveSupport monkeypatches, "a thousand cuts" is not an exaggeration: https://github.com/rails/rails/tree/main/activesupport/lib/a...


Here is a simple example: Ruby's start_with? method, which is confusing because it's bad grammar. So AS added an alias and called it starts_with? https://github.com/rails/rails/blob/main/activesupport/lib/a...

Why is this bad? Makes total sense to me. Now both forms work and hopefully it eventually makes it into the language like often happens. But until it does, why not monkey patch obvious mistakes?
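The patch in question is essentially one line (reduced here; the real file also covers ends_with?):

```ruby
class String
  alias_method :starts_with?, :start_with?
end

"hello".start_with?("he")   # => true, plain Ruby
"hello".starts_with?("he")  # => true, via the alias
```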


Because Ruby has been consistent: there is no trailing "s" in these method names.

Array has `include?`, Regexp has `match?` and String has `match?` and `start_with?`.

This is consistent: there is no "s".

Now when I'm in a codebase, depending on ActiveSupport and which parts of it are included, the consistency is missing. Is it with "s" or without "s"?

And which methods DO have the "s" version and which ones do not?


It's a fair point about consistency, but it's still bad naming. AS accepts both so I don't see a problem. I do hope Ruby will eventually deprecate and rename them to includes?, starts_with?, etc.


I don’t think you’re helping your case by holding out Devise as a problem example. Devise is extremely popular, works very well, can easily be extended to support different authentication modes like OAuth, and has saved countless hours for many people. In fact, I haven’t worked in the Ruby world for several years now, but devise is one piece of software that I frequently miss when using other languages.


Devise is well known to have many problems. It works out of the box, but when deviating from strictly the use case it was designed for, there is no documentation and the shortcomings of the design become apparent.

That's where the monkey patching comes in, usually.

But to be clear, I'm stating the facts that many software developer see in the Ruby world when you join an organization that's growing tremendously. As usual, if you fit perfectly in the Rails target and the Devise target, you won't encounter problems.


I don't see it, sorry. I don't think Shopify crashes every Monday because thousands of devs monkeypatch Object all the time or because AS adds 20 milliseconds to load time. Neither does Github or countless other huge companies. I agree that after a certain size types make a lot of sense and Ruby or other dynamic languages become less attractive, but ActiveSupport? Works well.


Shopify has very strict rules to prevent that and they have been working to address that problem for YEARS. They have actually worked on Packwerk (which they use) which enforces the usage of constants when crossing boundaries, rather than any form of interface or duck typing.

The load time of a single test is a recurring problem in the Rails world though, so ActiveSupport adds up.

Monkeypatching doesn't lead to crashes (it can, but that's not the point). It leads to:

- Less extendable code
- Less learning
- Less maintainable code

Even if the damage is small, given how competitive the world is, why take the chance for no benefit?


> Even if the damage is small, given how competitive the world is, why take the chance for no benefit?

Well that's the crux of it here, you see it as damage/no benefit while others in the Rails world disagree. It's a difference of opinions that won't be resolved tonight by arguing on Hacker News. I did like your comment about naming consistency in Ruby though, learned something new.


Oh I love these yes please!


> For example asking whether a variable is present? (truthy, so empty str and empty [] should return false)

Actually this is kind of the whole point of #present? - it's not just a truthiness check.

The only falsey values in Ruby are `false` and `nil`, but in the context of a Rails app you might want to treat things like "" and [] as "not present" (e.g., a user leaves a text field blank).
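Roughly what blank?/present? boil down to (simplified; the real ActiveSupport version also treats whitespace-only strings as blank):

```ruby
class Object
  def blank?
    respond_to?(:empty?) ? !!empty? : !self
  end

  def present?
    !blank?
  end
end

"".present?    # => false
[].present?    # => false
nil.present?   # => false
"hi".present?  # => true
0.present?     # => true (0 is truthy in Ruby)
```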


I think it may be a cultural thing — I don’t have the Smalltalk blue book with me, but I think this type of modification at runtime is very common in Smalltalk environments and languages related to it.


Depending on the strain of web development you encounter, it's more or less common in Javascript. Part of that might be overlap with Rails people, but there's such a long history of having to patch broken and missing JS language features and standard library functions in incompatible browsers that monkey-patching is normalized under the name of polyfills.

Once you're used to doing that kind of thing, people will feel the urge to take things to the next level, augmenting standard functionality and replacing the original functionality globally. Maybe you replace console.log() with a wrapper that includes a timestamp in the output, for instance. Not insane, and manageable for a smaller project that you have your head around.

It falls apart awfully quickly as it scales though. If behavior in your system diverges from standard because of a maze of monkeypatching, then you have to learn those quirks or there is a lot of frustration. And the possibility of spooky breaking changes in seemingly unrelated components becomes a big problem.


While this is technically the same thing as the rampant monkeypatching that happens in ruby, there is a difference in practice, in that polyfills tend to be written to emulate well defined interfaces that simply aren’t there, whereas monkeypatching adds random magic like the aforementioned “weeks” method on numbers.

This is personal preference, but I find it utterly absurd to adorn numbers with these time dimensioning methods. Why not volume or length too? Should I be able to say 2.miles? Why would an integer be innately bound to time, as opposed to a unitless mathematical abstraction? Ruby folks like to add these gimmicks that make code “read like English” and I find it very cutesy, for lack of a better word. When I see stuff like this, I’m distracted by pondering what the hell the magic actually means. Can I say 6.from_now? Yes, actually, I can. Does it make any sense that what I call “from_now” on is technically a unitless number? No, not at all. It is a pretense that it’s a dimensioned number for the sake of cuteness.



