Not really. Calling a C++ template-generated function is no different from calling a C function. If you have a lot of tiny template-generated functions then sure, the call overhead adds up, but no more than in a C program with a bunch of tiny functions.
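A minimal sketch of the point (my example, not from the article): once instantiated, a template function is just another ordinary function, and calling it is exactly like calling the C version.

```cpp
#include <cstdio>

// Plain C-style function.
static int add_int(int a, int b) { return a + b; }

// Template version: the compiler stamps out add<int>, which is just
// another ordinary function once instantiated.
template <typename T>
static T add(T a, T b) { return a + b; }

int main() {
    // Both calls compile down to a plain call (or get inlined);
    // there is no extra dispatch for the template version.
    std::printf("%d %d\n", add_int(2, 3), add<int>(2, 3));
    return 0;
}
```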
Minnesota does a significant amount of research on road surface materials. There is a section of I-94 northwest of the Twin Cities that has three segments of highway, and traffic can be diverted onto one of the segments to test new road surfaces. Here's the MN DOT site with some test videos: http://www.dot.state.mn.us/mnroad/testcells/mainline.html (thrilling stuff).
Now that I'm in MA, I have heard several people say that concrete highways do not last through the winter, despite the large number of concrete highways in good condition in MN. Maybe that's the result of careful local road surface research? Roads are certainly better in MN than they are in MA, even though MN has colder and snowier winters. That could just be anti-highway-spending sentiment left over from the Big Dig, though...
Another Midwestern transplant to MA here -- colder winters are actually better for highways. Cycling up and down around the freezing point all winter, the way Massachusetts does, produces massive frost heave that Minnesota roads just don't have to contend with.
Building a good road bed underneath helps enormously. Compare the roads in Maine and Quebec: Maine is a mess of frost heaves and constant repaving, while the Canadians rip an old road up down to ledge and rebuild a solid foundation. When a section of road gets rebuilt there, it lasts 15-20 years.
I grew up in MN and always loved Honeycrisp, but after relocating I was surprised to hear people describe them as tasteless or mealy. It turns out they vary a lot depending on growing region: cold-hardy apples produce very different fruit in warmer climates. I have heard this is a big part of why the SweeTango brand has been limited to a small number of growers in select regions.
I think keeping apples to their proper growing climate is the crucial point: these new tasty apple discoveries happen in the UMN orchard, and once selected, a branded apple really will do well on the market if its supply is constrained and it's grown by only a few farmers. So, with the UMN orchard as ground zero for these discoveries, farmers nearer to the orchard would seem to have a huge advantage when it comes to actually taking an apple from lab orchard to farm to market.
Any farmer growing Honeycrisps, for example, would want to prevent the variety's identity from being tainted by fruit grown in the wrong climate, or else risk the bad impression and lower sales.
I just closed my account. Unfortunately, this requires sending an email to support@uber.com. It's inconvenient, but gives you a place to explain that you will not use the service until they comply with the ADA.
The "six distinct features of a functional language" are misleading/inaccurate:
1. Laziness: Not required. See Lisp, Scheme, ML, OCaml, etc.
2. Functions as first-class citizens: This is probably the only hard and fast requirement.
3. No Side Effects: Not required. Again, see Scheme, ML, and Rust.
4. Static Linking: Certainly not, but the author seems to mean static binding, which is more important. However, a functional language doesn't actually need any binding aside from function invocation (see Lambda Calculus). `Let` bindings are generally available and very useful.
5. No explicit flow control: Function invocation is flow control. Loops aren't functional, but some functional languages have them.
6. No commands/procedures: If the author means "no top-level function definitions" that is clearly not true. Some functional languages even have macro languages.
This article gives the (incorrect) impression that functional programming is about a set of restrictions you must follow 100% of the time. Functional programming is a style that can be used in any language, as long as you can pass functions around as values. It wasn't pretty, but Java <=1.7 could still support a functional programming style by using `Callable` objects.
The `map` and `reduce` operations are certainly possible in imperative languages. Python has them built-in, they can be written in C++, and so on.
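To make that concrete, here's a quick sketch of `map` and `reduce` in plain C++ using `std::transform` and `std::accumulate` (my example, nothing article-specific):

```cpp
#include <algorithm>
#include <iostream>
#include <numeric>
#include <vector>

int main() {
    std::vector<int> xs{1, 2, 3, 4};

    // map: square each element
    std::vector<int> squares(xs.size());
    std::transform(xs.begin(), xs.end(), squares.begin(),
                   [](int x) { return x * x; });

    // reduce: sum the squares
    int sum = std::accumulate(squares.begin(), squares.end(), 0);

    std::cout << sum << "\n";  // 1 + 4 + 9 + 16 = 30
    return 0;
}
```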
There isn't consensus that Lisp, Scheme, ML, and O'Caml are functional. Period. There certainly is consensus everywhere that they aren't purely functional. Purely functional neither requires nor implies lazy evaluation, but the two tend to go together for good reasons.
"Functional programming is a style that can be used in any language"
No, I'm sorry, it's not. [Purely] Functional programming provides guarantees and properties that are only valid if the necessary discipline is enforced. The languages you list are merely procedural languages with functions.
They don't completely apply even to the languages the author talks about.
Those are more a list of Haskell features, if you replace static binding with dynamic binding. Even then, Haskell lets you kind of escape most of those rules, except for #6.
There are quite a few extra issues that come up when you implement a usable allocator. I'm surprised the article didn't mention them. Here are just a few:
Alignment: different systems have different minimum alignment requirements. Most require that all allocations be 8- or 16-byte aligned.
Buffer overruns: using object headers is risky, since a buffer overrun can corrupt your heap metadata. You'll either need to validate heap metadata before trusting it (e.g. keep a checksum) or store it elsewhere (see the sketch after this list). Headers also waste quite a bit of space for small objects.
Size segregation: this isn't absolutely essential, but most real allocators serve allocations for each size class from a different block. This is nice for locality (similarly-sized objects are more likely to be the same type, accessed together, etc). You can also use per-page or per-block bitmaps to track which objects are free/allocated. This eliminates the need for per-object headers.
Internal frees: many programs will free a pointer that is internal to an allocated object. This is especially likely with C++, because of how classes using inheritance are represented.
Double/invalid frees: you'll need a way of detecting double/invalid frees, or these will quickly lead to heap corruption. Aborting on the first invalid free isn't a great idea, since the way you insert your custom allocator can cause the dynamic linker to allocate from its own private heap and later free those objects through your custom allocator.
Thread safety: at the very least, you need to lock the heap when satisfying an allocation. If you want good performance, you also need to serve allocations to separate threads from separate cache lines, or you'll end up with false sharing. Thread-segregated heaps also reduce contention, but then you need a way of dealing with cross-thread frees (thread A allocates p, passes it to B, which frees it).
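To make a couple of the items above concrete, here's a deliberately simplified sketch of alignment rounding plus a checksummed per-object header. The 16-byte alignment constant, the header layout, and the checksum function are all assumptions for illustration; this is the bookkeeping shape, not a usable allocator.

```cpp
#include <cstddef>
#include <cstdint>

// Assumed minimum alignment for this sketch; many ABIs require 16 bytes.
constexpr std::size_t kAlign = 16;

// Round a request up to the next alignment boundary.
constexpr std::size_t align_up(std::size_t n) {
    return (n + kAlign - 1) & ~(kAlign - 1);
}

// Per-object header stored in front of the payload. Real allocators often
// avoid headers entirely (e.g. per-page bitmaps), precisely because an
// overrun from the previous object lands on exactly this metadata.
struct Header {
    std::uint32_t size;      // payload size in bytes
    std::uint32_t checksum;  // validated before the header is trusted
};

// Toy checksum: any cheap mix of the fields works for illustration.
inline std::uint32_t header_checksum(std::uint32_t size) {
    return size ^ 0xA5A5A5A5u;
}

inline bool header_ok(const Header& h) {
    // Refuse to act on metadata an overrun may have scribbled over.
    return h.checksum == header_checksum(h.size);
}
```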
The HeapLayers library is very useful for building portable custom allocators: https://github.com/emeryberger/Heap-Layers. The library includes easily-reusable components (like freelists, size classes, bitmaps, etc.) for building stable, fast allocators. HeapLayers is used to implement the Hoard memory allocator, a high performance allocator optimized for parallel programs.
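The composition idea behind it looks roughly like this; the layer names here are hypothetical, not the actual HeapLayers classes:

```cpp
#include <cstdlib>

// Bottom layer: get memory from the system (malloc here for simplicity).
class SystemHeap {
public:
    void* allocate(std::size_t sz) { return std::malloc(sz); }
    void deallocate(void* p) { std::free(p); }
};

// A layer that rounds every request up to a fixed size class before
// handing it to its parent. Purely illustrative.
template <class Parent, std::size_t SizeClass>
class RoundUpHeap : public Parent {
public:
    void* allocate(std::size_t sz) {
        std::size_t rounded = ((sz + SizeClass - 1) / SizeClass) * SizeClass;
        return Parent::allocate(rounded);
    }
};

// Layers compose at compile time; a freelist or locking layer would
// slot in the same way.
using MyHeap = RoundUpHeap<SystemHeap, 64>;

int main() {
    MyHeap heap;
    void* p = heap.allocate(10);  // actually reserves 64 bytes
    heap.deallocate(p);
    return 0;
}
```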
I wonder if it's possible to do some statistical attacks by sending sample input and measuring response time? That is, after getting the response times for a million sorts, perhaps you can make inferences about the current PRNG state?
I suspect fluctuations in latency would make this exceedingly difficult, but I am constantly amazed by the statistical attacks people pull off.
Considering the fact that timing attacks based on memcmp short-circuiting were shown to be remotely practical a decade ago, latency isn't going to be an issue. The bigger question is whether you can get your inputs down to a small enough number to practically determine the PRNG state (you probably can).
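A toy illustration of that short-circuit effect (not a practical attack: the secret, the guesses, and the timing loop are all made up, and you'd compile without optimizations so the loop isn't folded away). A comparison that bails out at the first mismatch takes longer when more of the prefix matches, and that difference is the signal a statistical attack averages out of the noise.

```cpp
#include <chrono>
#include <cstddef>
#include <cstdio>

// Early-exit comparison, like a naive memcmp-based check.
static bool insecure_equal(const char* a, const char* b, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        if (a[i] != b[i]) return false;  // bails at the first mismatch
    return true;
}

int main() {
    const char secret[]     = "s3cretvalue01234";
    const char guess_bad[]  = "xxxxxxxxxxxxxxxx";  // mismatches immediately
    const char guess_good[] = "s3cretvalue0xxxx";  // long matching prefix

    auto time_it = [&](const char* g) {
        auto t0 = std::chrono::steady_clock::now();
        volatile bool r = false;
        for (int i = 0; i < 1000000; ++i)
            r = insecure_equal(secret, g, sizeof(secret) - 1);
        (void)r;
        auto t1 = std::chrono::steady_clock::now();
        return std::chrono::duration<double>(t1 - t0).count();
    };

    std::printf("bad prefix:  %f s\ngood prefix: %f s\n",
                time_it(guess_bad), time_it(guess_good));
    return 0;
}
```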
Randomized allocation makes it nearly impossible to forge pointers or locate sensitive data in the heap, and it makes reuse unpredictable.
This is strictly more powerful than ASLR, which does nothing to prevent Heartbleed. Moving the base of the heap doesn't change the relative addresses of heap objects with a deterministic allocator. A randomized allocator does change these offsets, which makes it nearly impossible to exploit a heap buffer overrun (and quite a few other heap errors).
That paper only seems to mention heap overflows for purposes of writing to a target object that will later be used for indirection or execution. I don't see how it makes Heartbleed any better to extract a shuffled heap instead of a sorted one. What am I missing?
It's not just a shuffled heap, it's also sparse. Section 4.1 covers heap overflow attacks, with an attacker using overflows from one object to overwrite entries in a nearby object's vtable. Because the objects could be anywhere in the sparse virtual address space, the probability of overwriting the desired object is very low (see section 6.2).
The same reasoning applies to reads. If sensitive objects are distributed throughout the sparse heap, the probability of hitting a specific sensitive object is the same as the probability of overwriting the vtable in the above attack. The probability of reading out any sensitive object depends on the number of sensitive objects and the sparsity of the heap.
There are also guard pages sprinkled throughout the sparse heap. Section 6.3.1 shows the minimum probability of a one byte overflow (read or write) hitting a guard page. This probability increases with larger objects and larger overflows. You can also increase sparsity to increase this probability, at a performance cost.
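A rough back-of-the-envelope version of that argument (my simplification, not the paper's formula): if a target object of $s$ bytes sits at a uniformly random position in a sparse heap spanning $M$ bytes, then a single overflowing access at a fixed address hits it with probability about

```latex
\Pr[\text{hit target}] \approx \frac{s}{M},
\qquad
\mathbb{E}[\text{attempts}] \approx \frac{M}{s}
```

so the attacker expects on the order of M/s tries, and every miss risks landing on an unmapped page or a guard page and crashing the process.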
An attack that reads everything is different from an attack that writes everything; section 4.1 doesn't seem to account for that. The latter will just crash the computer like some kind of Core Wars champ. The former can copy out the whole heap! So a writing attacker has to worry about crashing the server or getting caught. A reading attacker can just loop, then run.
The guard pages, I believe, help -- but random guard pages just mean I won't know quite what's protected and what isn't. Just this last week I benefited quite a bit from being able to reconstruct year-old server memory layouts precisely.
In this case, I want a marginal chance of compromise no worse than 2^-192, about the strength of RSA-2048.
Daniel claims that volatility is actually lower with HFT when you look at implied volatility, a predictive measure of volatility. Lewis uses actual historical volatility in his argument. You shouldn't use a predictive measure when analyzing past performance:
Implied volatility is computed based on the relationship between an option's market price and a model-determined price. That's exactly the sort of information any trader would rely on to place bets: when the model value is higher than the market price, there is profit to be made. Any decrease in trading latency lets traders buy up instruments selling below their estimated value a bit quicker, driving the two numbers closer together.
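As a sketch of what a model-determined price means in practice, here's implied volatility backed out of a call price with Black-Scholes and bisection; the model choice and all the numbers below are assumptions for illustration.

```cpp
#include <cmath>
#include <cstdio>

// Standard normal CDF.
static double norm_cdf(double x) {
    return 0.5 * std::erfc(-x / std::sqrt(2.0));
}

// Black-Scholes price of a European call.
// S: spot, K: strike, r: risk-free rate, T: years to expiry, sigma: volatility
static double bs_call(double S, double K, double r, double T, double sigma) {
    double d1 = (std::log(S / K) + (r + 0.5 * sigma * sigma) * T)
                / (sigma * std::sqrt(T));
    double d2 = d1 - sigma * std::sqrt(T);
    return S * norm_cdf(d1) - K * std::exp(-r * T) * norm_cdf(d2);
}

// Implied volatility: the sigma that makes the model price match the
// market price, found here by simple bisection.
static double implied_vol(double market, double S, double K, double r, double T) {
    double lo = 1e-4, hi = 5.0;
    for (int i = 0; i < 100; ++i) {
        double mid = 0.5 * (lo + hi);
        if (bs_call(S, K, r, T, mid) < market) lo = mid; else hi = mid;
    }
    return 0.5 * (lo + hi);
}

int main() {
    // Hypothetical option: spot 100, strike 105, 1% rate, six months out,
    // trading at 4.50. The gap between model and market is what the
    // implied volatility absorbs.
    std::printf("implied vol: %.4f\n",
                implied_vol(4.50, 100.0, 105.0, 0.01, 0.5));
    return 0;
}
```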
That's not the only distinction being made here; the other argument is that Lewis is capturing a lot of factors other than HFT when considering volatility, for instance the collapse of the collateralized debt market.
Actually if you look at the documents of the new exchange in question, IEX, they don't propose to do anything about HFT at all.
They are still a price-time priority matching exchange, they still allow colocation (well, not directly, but they are themselves in data centers that offer colocation), they still offer exotic order types, and their much-hyped added latency amounts to 350 microseconds. They aren't trying to remove HFT from the markets, they are trying to disadvantage one class of HFT in favor of another.
I personally don't have any problem with their goal, but they aren't being honest about it and that seems problematic.