If you are consuming an API that provides an object with a destructor, you are correct: you can determine when destructors will be called.
The issue is when you produce an API that contains objects with destructors. Since you are handing these entities off to unknown code, you cannot ensure that they will be dropped. This was a problem in scoped threads in Rust.
I was a little unclear, but that is of course what I meant: I'm talking about the underlying shared data, because the pointers themselves don't have particularly interesting destruction behaviour. (Although the sibling is also correct that not all Rc/Arc/shared_ptr handles to the shared data will have their Drop called.)
I think that falls into the category I mentioned in the third paragraph of my comment: a serious pre-existing bug with other consequences will potentially cause the guarantee to be violated. A similar effect would happen if you had a double free that sometimes caused a crash, which is a similar level of programming mistake to creating a cyclic reference. To me it sits outside of a reasonable definition of "guaranteed".
No, typically, a reference cycle is fine. It results in valid memory that never gets read again, which is unfortunate but not dangerous, whereas double-frees can result in memory corruption. http://huonw.github.io/blog/2016/04/memory-leaks-are-memory-...
Python's "with" construct is analogous to the bracket pattern in Haskell that the article is talking about. It also works in the nested case in the presence of exceptions. Furthermore, the issue that Michael has with the bracket pattern in Haskell can also happen in Python.
True, but in Python the coding mistake would stand out much more, because the with block is syntax sugar: it does not look like regular function application, whereas in the Haskell example there is nothing to tell you that withMyResource is using the 'bracket pattern' (except by reading the source).
Also, I guess in Haskell there is more of an expectation that the type system should prevent you from expressing runtime errors.
I can see why you might think that using 'with' in Python in a broken way would be easier to spot because it's built into the language. However, having used both languages extensively, I can tell you that, at least for me, there's no discernible difference.
I think the reason might be that, in Haskell, a function starting with 'with' is, by convention, using the bracket pattern, and the way you would use such a function is very similar in structure to the Python way.
Something that is often said about C++ is that you're only ever using 10% of the language, but that everyone uses a different 10%. It's true, but it's true of every language to differing degrees. Everyone has their own way of forming programs, just as everyone has their own slightly different style of playing chess, cooking or forming sentences.
When you have a well-developed style, you will quickly spot any deviations from it. At that point, it doesn't matter whether your style was forced on you by the language or whether it's just a convention that you use.
It's certainly true that Haskellers expect a lot from the type system, even compared to other static languages, let alone Python.
I only mean it's visibly more obvious: you have an indented block... and what is the purpose of the indented block other than to say "do all your stuff with the resource _inside_ this block"? Using the with block feels very 'intentional'.
I'm not very familiar with Haskell, but it seems like you'd get used to the type system telling you everything you need to know. In this case, though, it doesn't. In the Python world we talk about 'pythonic/unpythonic'... it seems like it's maybe quite unhaskellish to have to rely on a naming convention and on remembering not to use the return value of the function?
I would guess that's why the article and many of the other comments here focused on how you could express this behaviour in Haskell's type system, where you'd expect it.
In short: type system > syntax sugar > naming convention
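For the record, here's a sketch of the standard trick (illustrative names, not from the article): give the resource a phantom type parameter and quantify over it, exactly as ST does, so that the resource cannot be returned out of its scope:

    {-# LANGUAGE RankNTypes #-}

    newtype Resource s = Resource ()

    -- Because 's' is only in scope inside the callback, the result
    -- type 'a' cannot mention it, so returning the Resource itself
    -- is a compile-time error (acquire/release elided for brevity).
    withMyResource :: (forall s. Resource s -> IO a) -> IO a
    withMyResource body = body (Resource ())

    -- withMyResource $ \r -> pure r   -- rejected: 's' would escape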
> I only mean it's visibly more obvious, you have an indented block...
Haskell is more similar than you realise; it's the difference between this:
    withSomeResource $ \resource -> do
        someFunctionOn resource
and this:
    with some_resource() as resource:
        some_function_on(resource)
> I'm not very familiar with Haskell but it seems like you'd get used to the type system telling you everything you need to know
As an outsider, you might expect a type error to mean that you made a logic error; in practice, it usually means you made a typo.
What happens is that the type system forces you to write things in a certain way. You internalize its rules and it moulds your style. You don't try random things until they stick; you write code expecting it to work and knowing why it should, just like you would in Python. It's just that more of your reasoning is being verified. "Verified" is the operative word here - the type system doesn't tell you how to do anything.
> it seems like it's maybe quite unhaskellish to have to rely on a naming convention and remembering not to use the return value of the function?
The Python equivalent of the problem here would be:
    current_resource = a_resource
    with some_resource() as resource:
        current_resource = resource
    current_resource.some_method()
So it's not that using the return value of the withSomeResource function is a problem; it's the resource escaping from the scope where it is valid.
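In Haskell (reusing the withMyResource sketch from earlier in the thread, with hypothetical describeResource/useResource helpers), the same escape looks like:

    -- Fine: the return value is merely derived from the resource.
    name <- withMyResource $ \r -> pure (describeResource r)
    putStrLn name
    -- Broken: the resource itself escapes the bracket.
    r <- withMyResource $ \r' -> pure r'
    useResource r  -- r has already been released by this point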
I think the crux of our discussion is about checked vs unchecked constraints.
When you work on (successful) large codebases, whether in a statically or dynamically typed language, there are always rules about style (and I mean 'style' more broadly than how your code is laid out). For example, in large Python projects, there might be rules about when it is acceptable to monkey-patch. These rules make it possible to reason about the behaviour of these programs without having to read through everything.
Large Haskell projects also have these rules, but Haskellers like to enforce at least some of them using the type system. It takes effort to encode these rules in the type system, and it is more difficult to write code that demonstrably follows the rules than code that merely follows them implicitly, but the reward for this effort is some assurance that the rules are actually being followed everywhere.
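A small example of what I mean (my own illustration, not from the thread): instead of a style-guide rule saying "only pass sanitized strings to the query layer", you can make the rule compiler-checked with a smart constructor:

    module Sanitized (Sanitized, sanitize, getSanitized) where

    -- The constructor is not exported, so the only way to obtain a
    -- Sanitized value is via 'sanitize'. Every use site now
    -- demonstrably follows the rule, rather than implicitly.
    newtype Sanitized = Sanitized { getSanitized :: String }

    sanitize :: String -> Sanitized
    sanitize = Sanitized . filter (/= '\'')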
For some rules this extra effort makes sense; for others it doesn't. The type system is just another way to communicate intent. Writing the best Haskell doesn't necessarily mean writing the most straitjacketed Haskell, but it does give you that option. Beginners often fall into the trap of wanting to try out the new-and-shiny and making everything stricter than is helpful.
For one-man projects, there's really no advantage to Haskell over Python (with the caveat that you may not remember all of the intricacies of your code in six months, and with Haskell you will have encoded more of your assumptions in the type system).
    with some_resource() as resource:
        some_function_on(resource)
Is that broken? If some_function_on saves the resource, yes. If it just temporarily uses it, no.
I don't think the claim that it's syntactically obvious in Python is correct. In both cases the typical syntax helps a little but it's easy to get wrong.
It is the case that "the typical syntax" is a little more enforced by Python-the-language.
> The thing is that, in Haskell, even when you attach a function to run during destruction, the runtime doesn't guarantee that the function will be called promptly, or even at all.
However, this is different from the bracket pattern that the article is talking about. No one in the Haskell community advocates cleaning up resources (like file descriptors, etc.) using only destructors.
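To make the non-guarantee concrete, here's a sketch using addFinalizer from System.Mem.Weak in base (the unreliable behaviour is the point, not a bug in the example):

    import System.Mem.Weak (addFinalizer)
    import System.Mem (performGC)
    import Control.Concurrent (threadDelay)

    main :: IO ()
    main = do
        let resource = [1 .. 10 :: Int]  -- stand-in for a real handle
        addFinalizer resource (putStrLn "finalizer ran")
        performGC
        threadDelay 100000
        -- The finalizer may run now, later, or never (e.g. on normal
        -- program exit); the RTS makes no promise, which is why
        -- bracket is used for scarce resources.
        putStrLn "done"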
You misunderstood me. I'm explaining why simply adopting RAII is inappropriate in Haskell, even though the author thinks it's a better approach. I've edited my comment to make this clearer.
Off topic, and IANAL, but I believe this website breaks European law by refusing to serve the article to European residents who block cookies.
Under the ePrivacy legislation (and GDPR's redefinition of consent), you must obtain "freely given consent" to use cookies that are not necessary for the proper functioning of the site (and under this definition, analytics cookies are not necessary).
By refusing to serve the site to those who opt to block cookies, they ensure that consent can only be given under duress.
Irrelevant to the main point. Since you're being childish, I'll explain again.
Not being able to read a particular article or articles is not duress. Duress would be if Nautilus threatened to send killer ninjas to your house if you refused to accept the cookies.
You're right to complain that conquistadog's comment was irrelevant to your main point. It's frustrating when people miss what you are saying and get so hooked on trivialities. Incidentally, your nitpicking about my use of the word duress is also irrelevant.
Next, you call people childish when you yourself are acting immaturely. How does name-calling generally work out for you as a means of settling disagreements?
It also bugs me that you are not even technically correct. You see, I looked up the definition of duress before I posted. I am British, so I used the OED and it told me that in the legal sense of the word, duress is, "Constraint illegally exercised to force someone to perform an act." Based on that definition, I don't think I could have picked a word that would better suit my intention.
I agree with you for the most part here. For the things that I typically use computers for, I would prefer to have both hardware and software protected sandboxes. There's a reason that browsers are switching to using multiple processes.
I suspect you are being downvoted for being overly emphatic. I can certainly think of scenarios where having this extra security is more costly than helpful.
An interesting point of note is that the Mill architecture has been designed to have much cheaper hardware protection than other architectures. [1]