- validate aggressively at the system boundaries (IO- or user-facing stuff)
- inside the system, clojure implementation code isn't particularly nil-aware
- inside each namespace, the most important functions (typically their public part) are covered with pre/post conditions. These check/document the argument/return types and, implicitly, non-nilness (see the sketch below).
- pre/post conditions are implicitly exercised with integration tests.
There's clojure.spec/fdef if you prefer that to :pre/:post.
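A minimal sketch of the :pre/:post style described above (charge! and its keys are hypothetical):

(defn charge! [account amount]
  {:pre  [(map? account) (number? amount) (pos? amount)] ;; checks/documents args, implies non-nilness
   :post [(map? %)]}                                     ;; % is the return value
  (update account :balance - amount))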
I prefer to expect nothing to be nil and assert for this, rather than write defensively in case something is accidentally nil. This requires a more structured approach but is a better solution for me. I don’t like nil and prefer not to think of it as a valid value at all. Too many null pointer exceptions have ruined my appreciation even for the band-aids described in this article. It’s the only pain point I have in Clojure. I just think nil is a terrible idea in any language, and the way it is overloaded in Clojure is among the most troubling of any language I've used.
Say you're making an application that tracks corporate fraud and one of the changes that you need to make is to include the date of incorporation onto the corporation model.
For the existing corporations in your database what do you do?
What I see people do in the wild is to use a value like 1900-01-01. Or they make a ton of tables so that the nil is now just a non-existent key instead of a non-existent value.
I dislike nil as well, and I try to avoid it if possible, but I can't understand how to handle certain situations without it or without re-creating it.
I use an explicit value to represent the absence of something specific like :id-unknown which is much more useful and safer than just throwing nil in there when a project scales.
If something is nil in Clojure, you have to ask why it is nil. The answer is not always simple. It could be a bug, it could be legitimately missing data, any number of things.
A simple typo in Clojure can give you nil. You don’t want to interpret that the wrong way.
Certainly not. If you are missing the code to specifically handle a null value, you hope you get a noisy explosion, not a runaway train in your program's logic. Fail-fast is a virtue.
This (anti-)pattern has a Wikipedia page, complete with Criticism section.
Incidentally, Objective-C does something like this out-of-the-box. Unlike Java, where invoking a method on null causes an exception to be thrown, or C++, where you just get good old undefined behaviour, Objective-C 'defaults' to doing nothing. This is described in the Wikipedia article. Strikes me as a very bad idea.
Well, in terms of system stability, I’ve usually found OS X and Obj-C GUIs to be much more stable than those on any other platform. Objective-C’s null handling could well be part of that. If so, what’s better for GUIs, then?
As I said, explicit fail-fast behaviour is what you want. It's far better that a bug manifest noisily and immediately, than that it go unnoticed.
At the risk of just re-stating myself:
In Java, if you attempt to invoke a (non-static) method on a null reference, it throws a `NullPointerException`. This is exactly what you want: it's immediately obvious to the developer, and to a user. No further damage can occur as a result of the bug. (Ignoring broader questions of exception-handling, of course.)
Java pretty consistently adopts this philosophy of runtime checks everywhere, even on production builds. C#/.Net does the same. It's a valuable feature for software correctness.
(Java's choice of exception name is curious given that Java has references and not pointers, but still, they have the right idea.)
C++ is the opposite: if you invoke a non-static method (or 'member-function') on `null`, you get 'undefined behaviour'. Anything could happen. Hopefully the program will crash with something akin to a 'segfault', but maybe it will do something disastrous.
Unlike Java, the C++ philosophy is generally performance-first, and it does few runtime checks for you. C++ has its reasons, but this approach is extremely unhelpful for software correctness; it can make both bug-detection and bug-hunting much harder.
(Incidentally, GCC and Clang feature a `-fsanitize=null` flag which gives you Java-style runtime checks, presumably with a moderate runtime performance cost. This isn't in the C++ standard, though.)
Objective-C is somewhere in the middle: you don't get undefined behaviour, but neither does the system tell you when you've got a null-dereference bug. It just silently carries on, and you're left assuming that your code is working fine.
There's nothing at all special about GUI code. Fail-fast is still what you want. There could be any number of technical or non-technical reasons you've found OS X GUI applications to be more stable. Null handling is just a tiny piece of the picture.
> Objective-C is somewhere in the middle: you don't get undefined behaviour, but neither does the system tell you when you've got a null-dereference bug. It just silently carries on, and you're left assuming that your code is working fine.
And the program often keeps running and generally working even when some potentially rare null-related bug occurs.
For a contrived example, imagine a user tries to paste a text field into a table and the text has an emoji in it. Let's further imagine the code doesn't handle full Unicode properly, and the internal function handler returns a NULL, breaking the data model. In the die-on-null case the entire app often dies, since the devs didn't think to add a try/catch around this case, thinking user strings from the UI object could never be null. Now the user has lost an entire table of data they've entered and doesn't know why for sure. The app just died when they pasted a bunch of data.
In the case of Obj-C with "ignore-null" behavior, the data model would effectively ignore the field, likely resulting in an empty field. The user would still be able to save the rest of their data, with the curious exception of this field. They try pasting just that field and realize it has an emoji. Maybe they'd file a bug that emojis don't work, but they work around it and enter a text smiley face. Of course many other things can/could happen, like weird data corruption. But I've experienced very similar occurrences in apps and worked around them without data corruption.
Now as a developer, it's harder to find the reason, as you don't have a nice log with "app crashed on NPE", but your users can still find a workaround and often still find the app usable.
This scenario still requires defensive programming around sub-system interfaces to prevent said errors from reaching, say, the internal embedded db, though that's usually already the case in a well-designed code base.
> There's nothing at all special about GUI code. Fail-fast is still what you want. There could be any number of technical or non-technical reasons you've found OS X GUI applications to be more stable. Null handling is just a tiny piece of the picture.
For general systems code, I'd agree with you on null detection and fail-fast behavior. As a developer, it's much better to know sooner when an error occurs. Granted, null handling is just a small component of an overall system, but it is an important one, and NPEs are some of the most common errors and the most likely to cause an entire app to die due to unhandled/unexpected nulls.
However, from a simple analysis as an end-user of various GUI systems, I prefer that some random, probably unused feature silently fails or does weird stuff while the main program keeps running, rather than an entire application (or worse, the whole DE) failing due to an unhandled NPE.
And I disagree that there isn't anything unique about GUI code. It's a very different domain than systems / server applications. It should therefore utilize coding patterns which produce better results for that target domain.
I believe the point is this: the problem with nil is that when it shows up, you can't be sure whether it was on purpose (to indicate an "empty" value) or by accident (because an error in the code caused a variable to not get bound). By introducing a separate identifier (e.g. :id-unknown) to explicitly handle the empty-value case, it frees up nil to solely signify improperly bound variables.
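To make that concrete, a hypothetical sketch (the map and keywords are made up):

(def corp {:name "ACME Corp" :incorporation-date :date-unknown}) ;; explicit "known missing" marker

(:incorporation-date corp) ;=> :date-unknown, legitimately absent data, on purpose
(:incorperation-date corp) ;=> nil, because the key was typo'd; nil now only ever signals a bug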
You have to deal with it in some way. If a function should return a boolean and something bad happens, then it can return Union<boolean, failure> or Optional<boolean> or a nullable boolean, or throw an exception. The difference is basically in compiler assistance. Null pointers are bad because of the lack of good static analysis support.
But you are talking about a statically typed way of handling nil. Options and Unions are not nil, which is the point. But you don’t get this in Clojure and nil can mean a lot of different things in different contexts, which is more complicated than a simple optional.
So how _do_ you handle nil without those things? You've said you try to just avoid it, but many functions return nil and there's no easy way around it (e.g. trying to get the max value of an empty list)
AFAICT, everywhere nil could be sensibly defined as zero without complicating interop, it is. And we can of course pave over the interop examples with `fnil`.
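For instance, a small sketch of that fnil move (the map is hypothetical):

((fnil inc 0) nil)             ;=> 1, nil is patched to 0 before inc sees it
(update {} :hits (fnil inc 0)) ;=> {:hits 1}, the missing key's nil is papered over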
Paving over the difference could seem convenient, but it smells like a possible source of non-obvious coding errors. AFAIK it's still worthwhile to know the difference between function identity and the value that produces identities.
Though lots of this is inherited from Common Lisp, where the list operations have more to do with lambda calculus and the physical construction of the old Lisp machines than with anything else.
So, I’ve programmed in Haskell and Clojure and I really wish people would give over on the “Haskell requires category theory” meme.
What Haskell offers is _generalisation of abstraction_. So, the function that satisfies the behaviour of fnil is called fmap, but it’s also the function that maps a function over all the values of a list, or the return values of a function. Equally, the equivalent of some->> (do notation) works for all of the above and more.
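For the Clojure side of that comparison, a minimal sketch (find-user is a hypothetical lookup that may return nil):

(defn user-email [db id]
  (some->> (find-user db id)
           :email
           clojure.string/lower-case))
;; nil at any step short-circuits the pipeline to nil instead of throwing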
None of this requires you to know what a Kleisli category is.
This is an important point to stress. However the moment people want to get more complicated than bind and return while doing IO the "C" word inevitably creeps into scope.
So while you don't need to know anything about abstract algebras to do Haskell, that knowledge is lurking so close to the surface that it is really more of an issue than is generally acknowledged.
> So, I’ve programmed in Haskell and Clojure and I really wish people would give over on the “Haskell requires category theory” meme.
> What Haskell offers is _generalisation of abstraction_. So, the function that satisfies the behaviour of fnil is called fmap, but it’s also the function that maps a function over all the values of a list, or the return values of a function. Equally, the equivalent of some->> (do notation) works for all of the above and more.
> None of this requires you to know what a Kleisli category is.
These three paragraphs are unintentionally enigmatic and emblematic! Literally, the only thing I understood in the 2nd and 3rd paragraphs was, "What Haskell offers is generalization of abstraction." It's as if some large fraction of the Haskell community is significantly lacking self-awareness of how its own PR comes across. Been there and done that, just with Objects.
EDIT: Which is to say, A community that engages in such lack of self-awareness is dooming itself to hipster obscurity.
(A function that reads a string for an int and adds one to the result.)
So fmap is polymorphic over a whole bunch of things that are unrelated in most other languages. (You might be asking why it’s called fmap and not map, and I wouldn’t have a good answer for you.) So, the abstractions of “Nullable, List and Function Return Value” are all part of a more general abstraction, Functor.
Now, at this point some people try to look up what a category-theoretic definition of a Functor is. That’s fine, more power to their elbow. However, really all a Functor is, in a practical sense, for a dumb programmer like me, is _something that supports fmap_.
My point is, it’s this stuff that’s useful and cool in Haskell. It won’t help you understand whatever they’re talking about on r/Haskell. And my only point about Kleisli categories is that you don’t need to know what they are. There’s a million and one interesting computer science concepts that are usable with a type system with Haskell’s properties, but you don’t have to use them all.
It's the coding scenes that do a good job on PR that win. "Amateurs study tactics, while professionals study logistics." -- I think there's something analogous with programming languages and ecosystems/communities.
When taken in the context of programming in Haskell, category theory is not really that important IMHO. Yes, the words are strange, but once you know what they mean, they are just words. For example (again, in the context of programming) a functor is just a collection with a function that takes the contents out of the collection, transforms them (into anything) and puts them back into the collection. A monoid is just a functor (i.e. collection) that can hold zero or more values. A monad is a monoid (a collection that can hold zero or more values) with a function that takes the contents out of the collection, transforms them into something of the same type and puts them back into the collection.
Category theory is really just useful because it shows you the building blocks. Some functors are not able to be used as monads. It's useful to have some intuition as to why. Even some things like applicative (a functor with a function to which you can successively apply multiple parameters) pop out and give you some useful insight (non-applicative functors can't be monads -- it's not obvious why until you try to implement it).
There is value in this organisation. The words are not fantastic, but again, they are just words.
Whether you need to know category theory or not, the reality is that Haskell is a much more complex language than Clojure. My team works with Clojure, and we regularly hire co-op students who typically have no exposure to FP. They're able to start writing useful code within a couple of weeks or so on average.
Unfortunately, that doesn't match my experience at all. Haskell requires understanding many more concepts to use effectively than Clojure does. Lazy evaluation, the large syntax, and the advanced type system all add complexity. My experience is that it takes people a long time before they're able to read and write idiomatic Haskell code without assistance.
With Clojure, we're able to do a very quick ramp up, and then have new hires write code with very little assistance from the rest of the team. I simply haven't seen this be the case with Haskell even for experienced developers.
I suspect the problem is mentoring. It took me months to get basic Haskell.
The people I mentored, though, could clarify any misunderstanding and get explanations from multiple viewpoints from me -- after years of experience with these abstractions. With such mentoring, you don't have to go through all the confusion phases.
The core point here is that Haskell is a more complex language that requires understanding and applying more concepts to write effective idiomatic code. Mentoring does help, but you still need to build a mental model of using the language, and there isn't a shortcut for that. Clojure requires a smaller mental model than Haskell, and that makes it easier to learn. The end result is that you have to spend less time ramping people up.
If Haskell works for your team that's great though, it did not work for mine.
I think the thing is, if you want to write at a Clojure-level-abstraction, Haskell won’t stop you doing that and I get the impression that it’s actually the way a lot of Haskell programmers operate. (Chas Emerick has recently been advocating this approach and I get the impression it’s actually the style of GHC itself.)
Clojure people use the untyped equivalent of row types, i.e. being able to arbitrarily add and remove fields to and from a record. That would be nice to have in Haskell (though personally it's nowhere near a dealbreaker for me).
"Generalization of abstraction" sounds a lot like "a maze of twisted passages, all alike"
That's exactly what I don't like about some languages: if everything is a function, then it's all a big ball of mud. The only thing you can do with a function is call it.
I'd rather have classes of capabilities. Some things are callable, others are iterable, some are printable. But if it's truly about abstraction generalization, that sounds like a mess.
> if everything is a function, then it's all a big ball of mud.
Not really, each distinct function will still have a distinct type. (This is largely why I prefer ML-family languages over Lisps.)
>The only thing you can do with a function is call it.
You can also abstract over it, and pass it around. Which allows you to build whatever you want, numbers, booleans, if-then-else, etc.
The combination of function as your basic unit of abstraction and types as the differentiating descriptor is kind of the opposite of a mess, as you have a correspondence to familiar logic operations.
A function type A -> B is implication (given A, we have B), a product type (A,B) is conjunction (A and B), a sum type A|B is disjunction (A or B). (Sure, the logic isn’t necessarily sound, but it’s still useful).
When you have the fundamentals down you can build whatever capability system you need on top of solid abstractions.
I’m currently working on an ETL project in Haskell and it’s structured around a similar capabilities divide to the one you describe, defined by typeclasses/interfaces; it’s just all functions.
One problem with using `or` is that if the key is associated with false, it will be ignored in favour of the default. Keywords support a second argument, which is the default to use on not-found, so you can just write:
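;; for example (hypothetical settings map):
(:enabled? {:enabled? false} :default)      ;=> false, the stored false survives
(or (:enabled? {:enabled? false}) :default) ;=> :default, `or` clobbers the false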
Null / nil is the gaping hole in the safety net of static typing. I dislike and distrust any language where null / nil is a thing. Yes, that includes SQL, which otherwise would be a great language.
At the same time, I've found Haskell to be too mind-bending to use for any but the most trivial of tasks. (Eg. monad pyramids and the most cryptic error messages in the history of CS!)
Fortunately I've found a language that covers all bases: OCaml. It has static typing with full inference; algebraic datatypes; eager evaluation by default (yes, I believe Haskell's default of lazy to be counter-productive); no null / nil value; fast compilation to machine code and to Javascript; a compact syntax; and good editor support, including a modern gofmt-inspired auto-formatter.
This is a good article. One gotcha I ran into using fnil is that you don't get variable-arity functions for free from it in Clojure (YMMV in cljs).
This caused an issue when I was using (fnil + 0 0) in a reduce function, as it would barf on an empty collection. I got around this by providing an init value to the reduce call, but it caught me off guard, since (reduce + []) works.
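A minimal repro of the gotcha:

(reduce + [])              ;=> 0, because (+) has a zero-argument arity
(reduce (fnil + 0 0) [])   ;=> ArityException, the fn returned by fnil has no zero-arg arity
(reduce (fnil + 0 0) 0 []) ;=> 0, the init value means reduce never calls the fn with no args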
Nil-checking is a form of input validation. Validation often leaks into helper functions lower down, where it doesn't belong: those functions have neither the context nor the control to fix the source of the problem. Separation of concerns means assigning management of risk to the bearer of profit (the calling code), where it can "fail/retry/failover" fast, instead of leaking validation code into functions that could be pure.
Therefore, I prefer to hoist missing value handling up to the caller, akin to "smart vs dumb components", where the caller has sufficient context and control to recover from errors. For example,
Instead of:
(defonce bus (ZeroMQ. "some-url")) ;; this might be nil, but it'll leak into helpers that should only care about values

(defn transmit! [bus payload]
  (if bus ;; why???
    (if payload ;; gross
      (.send bus (zmq/encode-string payload))
      (log/error "nil payload whoops"))
    (log/error "something bad happens that the caller doesn't understand and can't do anything about")))

(defn tick! [ms]
  ;; the wrong place to handle initialization problems
  (if-let [input (.read bus buffer-length)] ;; multiple failure modes
    (transmit! bus [:some (inc input)])
    (comment "do nothing")))
Instead, handle initialization and input validation nearer the source and pass in system components explicitly to simplify mocking and testing:
(defn transmit! [^ZeroMQ bus payload] ;; type hint
  {:pre [bus payload]} ;; basic guards
  (->> payload ;; notice the lack of nil-checking. you have to enforce contracts somewhere
       (zmq/encode-string)
       (.send bus)))

(defn init! [{:as config :keys [zmq-url]}]
  {:bus (ZeroMQ. zmq-url)
   :redis ...
   :logger ...})

(defn tick! [{:as system :keys [bus]} ms]
  (if-let [parsed (parse-input (.read bus buffer-length))]
    (transmit! bus [:some (inc parsed)]) ;; parse-input always returns valid data
    (log/error "it's fine to handle runtime parsing errors here")))
(defonce !running (atom true))

(defn -main []
  (let [system (init! {:some "config-url"})] ;; attempt bus failover here
    (if-let [bus (:bus system)]
      (go-loop []
        (if @!running
          (do
            (tick! system (time/now))
            (Thread/sleep 30)
            (recur))
          (log/info "Shutting down...")))
      (log/error "Retrying serial bus connection..."))))
Are you suggesting that there is a meaningful difference between (:firstname record) and (.-firstName record), or (func x) and (.method x), the first of which is nil safe and the second isn't? (It doesn't seem you are, because you're focused on the ifs, but nil punning is about eliding ifs)
Hi Dustin. No, those function/method calls are equivalent.
My central point is that it is not reasonable to expect the consumer of a value to recover from a failure mode that was generated at the producer. Rather, it is better to handle failover and retries at the producer. Of course, once a (valid) value has left your system, any downstream errors are the purview of the subsystem, but you may want to provide a signalling pathway back to the producer if the error can be resolved upstream.
This naturally results in writing thinner consuming functions and "fatter producers" or "smarter callers."[^1]
nil doesn't have to be a failure mode, if we are programming with data it can just mean "empty". The trouble, I think you would point out, is that (.send bus payload) doesn't have a sensible definition of empty – because the bus reference is imperative/stateful/object oriented, not data oriented. Nil punning can't help with foreign interop to imperative systems, but that doesn't mean nil is the root cause of the pain. The imperative dependency is.
There are as many kinds of nil as there are programming languages.
Cixl [0] doesn't derive nil from every other type, instead Nil and most other types derive Opt; which means that user code may specify Opt to accept nil or any other type to have them automatically trapped by function dispatch. I find this to be a nice compromise between wrapped optionals and shooting from the hip.
Yep. Being on alert for nils should not be a human task, and it shouldn't just be a convention, because it is a trivial task that the type system can handle and enforce.
The article is good and the author is right, but it should not be the author's problem in the first place.
Yes, in my experience it does. In my projects I fdef all my functions and then turn on instrumentation (with the Orchestra[0] library, so that it also checks :ret specs). It helped me find several instances where I was incorrectly assuming that I would never get nil as an input.
Couple it with Expound[1] to make reading the spec errors a bit easier, and you've got a much better story for handling nil in Clojure.
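For illustration, a minimal sketch of that setup (full-name and its specs are made-up names):

(require '[clojure.spec.alpha :as s]
         '[orchestra.spec.test :as st])

(s/def ::first-name string?)
(s/def ::last-name string?)

(defn full-name [user]
  (str (:first-name user) " " (:last-name user)))

(s/fdef full-name
  :args (s/cat :user (s/keys :req-un [::first-name ::last-name]))
  :ret string?)

(st/instrument) ;; Orchestra's instrument also checks :ret specs, unlike clojure.spec's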
> Clojure has a function that does basically the same thing. The get function will return nil if a key in a dictionary isn’t present:
Isn't that already what happens when you use the map as a function directly? Even the sub-feature of being able to provide a default value if the key is not found is supported.
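E.g.:

({:a 1} :a)          ;=> 1
({:a 1} :b)          ;=> nil
({:a 1} :b :missing) ;=> :missing, the not-found argument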
That's nil punning, which the article specifically does not care for, if not outright decries, in its conclusion:
> Over at Lispcast, Eric Normand argues for the “nil-punning” approach, which is fine. But I think this approach requires a confused notion of what nil/Nothing actually means. […] It is much simpler to understand nil as Nothing, i.e. the absence of a value (which is a type).
Assuming that, the behaviour of `get` is specifically one you do not want.
Furthermore, calling the map actually behaves as "Haskell and other ML-ish languages". `Data.Map.lookup` takes a Map, not a Maybe Map.
‘get’ is a function that will always exist, but using a map in function position is very fragile, and everything breaks if the map itself is nil. If you are advocating for this, it is contrary to the general consensus in the community; you will almost never see this in real code.
Responding to your edit: it’s safe to do it in Haskell because you know it isn’t nil at compile time.
> ‘get’ is a function that will always exist, but using a map in function position is very fragile and everything breaks if the map itself is nil.
It's no more fragile than any other function which doesn't do nil punning, and again the essay we're supposedly discussing advocates not using nil punning.
All I'm pointing out is that for the purpose of the article there seems to be no difference between using the map as a function and using get, and in fact that using the map directly is more suitable to the essay's espoused philosophy of avoiding nil punning and treating nil as a hole/error rather than a normal value.
The problem is that ‘get’ possibly creates one nil result. Using the map as a function, even if nil is understood to be an error, now gives you two possible and very different errors from one expression. It’s a bad idea and generates added ambiguity and complexity, which further strays from the article’s philosophy.
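Concretely:

(get nil :a)         ;=> nil, one failure mode
(:a nil)             ;=> nil, keyword lookup also tolerates a nil map
(let [m nil] (m :a)) ;=> NullPointerException, a second and very different failure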
Yeah, Common Lisp’s gethash is better than get in most other languages because of Common Lisp’s support for multiple return values.
(gethash key hash-table)
Returns two values: the first is the value found in the hash table, or nil if no value was found. The second is nil (i.e. false) if no value was found, and non-nil (maybe t? I forget what the standard says) if a value was found.
It's the style used by Go and certainly better than Java/Ruby/JS (which just return null/nil/undefined on a missing key, same as clojure).
My personal ranking would be along the lines of:
1. Option types, that only makes sense for statically typed languages but it's the most clear and helpful and all other styles can trivially be composed from that.
2. Erlang-style, that's less the API which is similar to (though less forgiving/error-prone than) CL/Go and more the language: find/2 returns `{ok, Value} | error`, so to actually get the value you must either assert that the retrieval succeeded by pattern-matching on `{ok, Value}` directly (faulting if that didn't work) or properly pattern-match on both cases, and it doesn't give the illusion of a value on failure. Erlang also provides (3) and (5) via get/2 and get/3.
3. Python-style, the default is to raise on missing keys which makes it unmissable that the value was not there, alternatively provides a variant of (5) which can fairly easily be used as a (4) via the get method (return a placeholder value which can't have existed in the map, check against it by identity).
4. Common Lisp & Go, both will implicitly ignore the presence flag by default (bind result to single value) and just return a default on miss in that case. Common Lisp has an edge because Go's statically typed so it could make better use of MRV, and CL has much better abilities to manipulate and reify MRVs (only thing you can do in Go is bind them).
5. Clojure, Ruby and (finally) Java >= 8: a missing key returns a valid default[0], but it's also possible to customise that default (oddly enough in Ruby the same method which lets you get a non-hash-global default also lets you opt into (3), but I don't think it's used much?). For Clojure and Ruby this lets you emulate a (4) the same way Python does (a Clojure sketch of that trick follows below).
6. Java < 8/JS: a missing key returns a valid default (null/undefined) and there's no way to differentiate "there was no value" and "this was a value" save by a second access (in/contains).
[0] as in the default they return could have been a valid value all along
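A sketch of the sentinel trick from (3)/(5) in Clojure (get-checked is a made-up helper):

(defn get-checked [m k]
  (let [sentinel (Object.)        ;; identity-unique, can't collide with any stored value
        v (get m k sentinel)]
    (if (identical? v sentinel)
      ::missing
      v)))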
I really like Common Lisp's solution because it doesn't force boilerplate on you. If you care whether or not the key was present, the second return value is there and, otherwise, you just continue on your way.
Also, since MRVs are a language construct, you can abstract over them to implement the equivalent of Haskell's >>= / maybe, or to throw on a missing key. So, I'm not really sure there's a decisive advantage of Options vs. the MRV solution. In fact, if you change your perspective a bit, Options (and possibly sum types in general) are just ways of getting multiple return values in languages that only support single return values: i.e. you could think of the sum type's case ("Some" or "None") as one of the return values and the wrapped value(s) as another.
> I really like Common Lisp's solution because it doesn't force boilerplate on you.
Checking what needs to be checked is not boilerplate.
> If you care whether or not the key was present, the second return value is there and, otherwise, you just continue on your way.
Which is the entire issue. By default, if you're not careful or don't check, you just get garbage without warning.
> Also, since MRVs are a language construct, you can abstract over them
No, MRV being a language construct does not allow that at all (you can't do any such thing in Go because MRVs are language structures rather than properly reified). You can abstract over MRVs in CL despite them being MRVs, because CL specifically provides a bunch of dedicated constructs for working with MRVs.
> So, I'm not really sure there's a decisive advantage of Options vs. the MRV solution.
That options force the developer to deal with the issue and do so by default.
> In fact, if you change your perspective a bit, Options (and possibly sum types in general) are just ways of getting multiple return values in languages that only support single return values
Utter nonsense, MRVs are products. The entire point of sum types is that they're not, their purpose is to provide alternatives in a type-safe manner. The way to get multiple return values in "languages that only support single return values" is tuples.
> you could think of the case of the sum type (i.e. "Some" or "None" being one of the return values and the wrapped value(s) being another.
> No, MRV being a language construct does not allow that at all (you can't do any such thing in Go because MRVs are language structures rather than properly reified).
Oops, I meant "because MRVs are a first-class language construct in CL (if only because of macros), you can abstract over them ..."
Secondly, tagged unions are a very standard way of simulating sum types in languages that lack them. All I was saying is that MRVs can be used in a similar way. E.g., an option can be replaced by a pair of values, one of which indicates whether or not the value is present and the other indicates what the value was.
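In Clojure terms, that pair-of-values replacement might look like (lookup is a made-up helper):

(defn lookup [m k]
  (if (contains? m k)
    [(get m k) true]  ;; value plus "present" flag
    [nil false]))     ;; absent, and says so explicitly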
As far as the boilerplate issue goes, if you really need to make sure that a field is present, you shouldn't be using hash tables in CL: you use a class, and when you try to access an unbound slot, it throws.
Finally, it's just wrong to say that Options are only useful in statically typed languages: any _strongly_ typed language can use them effectively, whether or not those types are checked at run time or compile time.