> Why would anyone pick the flexible/potentially-insecure option?
Because having a connection that's encrypted between a user and Cloudflare, then unencrypted between Cloudflare and your server is often better than unencrypted all the way. Sketchy ISPs could insert/replace ads, and anyone hosting a free wifi hotspot could learn things your users wouldn't want them to know (e.g. their address if they order a delivery).
Setting up TLS properly on your server is harder than using Cloudflare (disclaimer: I have not used Cloudflare, though I have sorted out a certificate for an https server).
The problem is that users can't tell if their connection is encrypted all the way to your server. Visiting an https url might lead someone to assume that no-one can eavesdrop on their connection by tapping a cross-ocean cable (TLS can deliver this property). Cloudflare breaks that assumption.
Cloudflare's marketing on this is deceptive: https://www.cloudflare.com/application-services/products/ssl... says "TLS ensures data passing between users and servers is encrypted". This is true, but the servers it's talking about are Cloudflare's, not the website owner's.
Going through to "compare plans", the description of "Universal SSL Certificate" says "If you do not currently use SSL, Cloudflare can provide you with SSL capabilities — no configuration required." This could mislead users and server operators into thinking that they are more secure than they actually are. You cannot get the full benefits of TLS without a private key on your web server.
Despite this, I would guess that Cloudflare's "encryption remover" improves security compared to a world where Cloudflare did not offer this. I might feel differently about this if I knew more about people who interact with traffic between Cloudflare's servers and the servers of Cloudflare's customers.
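For context, the "private key on your web server" part is what the flexible mode skips. Here's a minimal sketch of terminating TLS yourself using only Python's standard library; the certificate paths follow the Let's Encrypt layout and are illustrative only, not a recommendation of this particular server:

    import http.server
    import ssl

    # Illustrative paths in the Let's Encrypt layout; substitute your own cert/key.
    CERT = "/etc/letsencrypt/live/example.com/fullchain.pem"
    KEY = "/etc/letsencrypt/live/example.com/privkey.pem"

    # Serve the current directory over HTTPS, holding the private key ourselves.
    server = http.server.HTTPServer(("0.0.0.0", 443), http.server.SimpleHTTPRequestHandler)
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile=CERT, keyfile=KEY)
    server.socket = ctx.wrap_socket(server.socket, server_side=True)
    server.serve_forever()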
Let's Encrypt and ACME haven't always been available. Lots of companies also use appliances for the reverse proxy/Ingress.
If they don't support ACME, it's actually quite a chore to do - at least it was the last time I had to, before ACME was a thing (which is admittedly over 10 years ago).
To me it reads like there was a gradual rollout of the faulty software responsible for generating the config files, but those files are generated on approximately one machine, then propagated across the whole network every 5 minutes.
> Bad data was only generated if the query ran on a part of the cluster which had been updated. As a result, every five minutes there was a chance of either a good or a bad set of configuration files being generated and rapidly propagated across the network.
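A toy simulation of my reading (the numbers and structure are made up, not Cloudflare's actual system): the generation job lands on an updated node with some probability, and whatever it produces gets pushed everywhere on the next five-minute cycle.

    import random

    UPDATED_FRACTION = 0.3  # hypothetical share of the cluster running the faulty code

    def generate_config():
        # Bad data only if the query runs on an updated part of the cluster.
        return "bad" if random.random() < UPDATED_FRACTION else "good"

    # One simulated hour: each 5-minute cycle propagates its result network-wide.
    for minute in range(0, 60, 5):
        print(f"t+{minute:02d}m: propagating {generate_config()} config everywhere")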
There's a whole paragraph in the article which says basically the same as your point 3 ( "glass bouncing, instead of shattering, and ropes defying physics" is literally a quote from the article). I don't see how you can claim the article missed it.
From the article, it looks like the problem is partially caused by significant parts of the transmission network being temporarily shut down due to ongoing upgrades. These could probably have been started slightly sooner, but they are already underway, so I don't think your point is well supported.
Except that if those upgrades had been started 10 years earlier, demand would have been lower at the time (10 years less growth in demand), so the reductions in capacity would have had a much smaller effect on prices.
Fonts can be complicated. Using nonsense like [1] (specifically contextual alternates), you could have the glyph for the first letter of a word depend on the last letter. I don't think you could get that to work for all letters in an arbitrary length word, but making a font that works for all words shorter than say 20 characters should be doable.
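To give a flavour of what that looks like, here's a rough sketch that just prints OpenType feature-file-style "calt" rules for one first letter; the glyph names like a.z and the @LETTER class are made up, and a real font would also need word-boundary handling, so treat this as an illustration of the rule explosion rather than a working feature:

    # Enumerate "first glyph depends on last letter" rules for words of length 2..20,
    # for a single first letter "a". The feature syntax has no unbounded "zero or
    # more" quantifier, so each word length needs its own rule.
    LETTERS = "abcdefghijklmnopqrstuvwxyz"
    for last in LETTERS:
        for middle_len in range(19):                    # 0..18 letters in between
            middle = " ".join(["@LETTER"] * middle_len)
            if middle:
                print(f"sub a' {middle} {last} by a.{last};")
            else:
                print(f"sub a' {last} by a.{last};")

That's 26 x 19 rules for a single first letter, and roughly 13,000 for all of them, which is why "nonsense, but doable" feels about right.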
Angular size is proportional to size/distance, so the calculation you're trying to do is correct; however, 50700 km is more than 1e7 meters so the angular sizes differ by about 3 orders of magnitude.
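For reference, the relation and the unit conversion involved (just the arithmetic; the distance figures from the parent comment aren't repeated here):

    # Small-angle approximation: angular size (radians) ~ physical size / distance,
    # with size and distance in the same unit.
    size_m = 50_700 * 1_000          # 50,700 km expressed in metres
    print(f"{size_m:.2e}")           # 5.07e+07 -> well above 1e7 m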
> To calculate the energy consumption for the median Gemini Apps text prompt on a given day, we first determine the average energy/prompt for each model, and then rank these models by their energy/prompt values. We then construct a cumulative distribution of text prompts along this energy-ranked list to identify the model that serves the 50-th percentile prompt.
They are measuring more than one model. I assume this statement describes how they chose which model to report the LM Arena score for, and it's a ridiculous way to do so - the LM Arena score calculated this way could change dramatically from day to day.
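My reading of that selection, as a rough sketch (model names, energy figures, and traffic counts are invented):

    # Rank models by energy/prompt, then walk the prompt-count CDF to find which
    # model serves the 50th-percentile prompt that day.
    models = [
        # (name, avg energy per prompt in Wh, prompts served that day)
        ("small-model",  0.05, 6_000_000),
        ("medium-model", 0.24, 3_000_000),
        ("large-model",  1.10, 1_000_000),
    ]
    models.sort(key=lambda m: m[1])                  # rank by energy/prompt
    total_prompts = sum(count for _, _, count in models)
    cumulative = 0
    for name, energy, count in models:
        cumulative += count
        if cumulative >= total_prompts / 2:          # serves the 50th-percentile prompt
            print(f"median prompt served by {name} at ~{energy} Wh/prompt")
            break

Because the pick depends on that day's traffic mix, a modest shift in which models handle prompts can change which model gets reported, hence the day-to-day instability.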
Could you explain your claim that ANNs are nothing like real neural networks beyond their initial inspiration (if you'll accept my paraphrasing)? I've seen it a few times on HN, and I'm not sure what people mean by it.
To the best of my very limited understanding of neural biology, neurons activate according to inputs that are mostly activations of other neurons. A dot product of weights and inputs (i.e. one part of matrix multiplication) together with a threshold-like function doesn't seem like a horrible way to model this. On the other hand, neurons can get a bit fancier than a linear combination of inputs, and I haven't heard anything about biological systems doing something comparable to backpropagation, but I'd like to know whether we understand enough to say for sure that they don't.
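Concretely, the caricature I have in mind is something like this (a toy sketch, not a claim about biology):

    import numpy as np

    # A unit whose activation is a weighted sum of other units' activations
    # pushed through a threshold-like function.
    def threshold_neuron(inputs, weights, bias):
        return 1.0 if np.dot(weights, inputs) + bias > 0 else 0.0

    # The smoother variant ANNs typically use, so that gradients exist for training.
    def sigmoid_neuron(inputs, weights, bias):
        return 1.0 / (1.0 + np.exp(-(np.dot(weights, inputs) + bias)))

    x = np.array([0.2, 0.9, 0.1])   # activations of upstream "neurons"
    w = np.array([1.5, -0.8, 0.3])
    print(threshold_neuron(x, w, bias=-0.1), sigmoid_neuron(x, w, bias=-0.1))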
Actually that's not true. Our neocortex - the "crumpled up" outer layer of our brain, which is basically responsible for cognition/intelligence - has a highly regular architecture. If you uncrumpled it, it'd be a thin sheet of neurons about the size of a tea towel, consisting of 6 layers of different types of neurons with a specific inter-layer and intra-layer pattern of connections. It's not a general graph at all, but rather a specific processing architecture.
None of what you've said contradicts its being a general graph rather than, say, a DAG. It doesn't rule out cycles either within a single layer or across multiple layers. And even if it did, the brain is not just the neocortex, and the neocortex isn't isolated from the rest of the topology.
It's a specific architecture. Of course there are massive amounts of feedback paths, since that's how we learn - top-down prediction and bottom-up sensory input. There is of course looping too - e.g. the thalamo-cortical loop - we are not just a pass-thru reactionary LLM!
Yes, there is a lot more structure to the brain than just the neocortex - there are all the other major components (thalamus, hippocampus, etc), each with their own internal architecture, and then specific patterns of interconnect between them...
This all reinforces what I am saying - the brain is not just some random graph - it is a highly specific architecture.
Did I say "random graph", or did I say "general graph"?
> There is of course looping too - e.g. the thalamo-cortical loop - we are not just a pass-thru reactionary LLM!
Uh-huh. But I was responding to a comment about how the brain doesn't do something analogous to back-propagation. It's starting to sound like you've contradicted me to agree with me.
I didn't say anything about back-propagation, but if you want to talk about that then it depends on how "analogous" you want to consider ...
It seems very widely accepted that the neocortex is a prediction machine that learns by updating itself when sensory input contradicts its top-down predictions. With multiple layers (cortical patches) of pattern learning and prediction, there necessarily has to be some "propagation" of prediction-error feedback from one layer to another, so that all layers can learn.
Now, does the brain learn in a way directly equivalent to backprop, in terms of using exact error gradients or a single error function? Presumably not - it more likely works in a layered fashion, with each higher level providing error feedback to the layer below, and with that feedback likely just being what was expected vs what was detected (i.e. not a gradient - essentially just a difference). Of course gradients are more efficient in terms of selecting varying update step sizes, but directional feedback would work fine too. It would also not be surprising if evolution has stumbled upon something similar to Bayesian updates in terms of how to optimally and incrementally update beliefs (predictions) based on conflicting evidence.
So, that's an informed guess of how our brain is learning - up to you whether you want to regard that as analogous to backprop or not.
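As a toy illustration of the "difference rather than gradient" point (this is just least-squares regression, not a brain model), a sign-only update still recovers the answer, just less precisely than the exact gradient:

    import numpy as np

    # Compare an exact-gradient update with a sign-only ("directional") update.
    rng = np.random.default_rng(0)
    w_true = np.array([2.0, -3.0])
    X = rng.normal(size=(200, 2))
    y = X @ w_true

    def fit(transform, steps=200, lr=0.05):
        w = np.zeros(2)
        for _ in range(steps):
            err = X @ w - y                   # predicted minus observed
            grad = X.T @ err / len(y)         # exact gradient of the squared-error loss
            w -= lr * transform(grad)
        return w

    print("exact gradient:", fit(lambda g: g))
    print("sign only     :", fit(lambda g: np.sign(g)))   # settles within ~lr of w_true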
Neurons don't just work on electrical potentials; they also have multiple whole systems of neurotransmitters that affect their operation. So I don't think their activation is a continuous function. Although I suppose we could use non-continuous functions for activations in a NN, I don't think there's an easy way to train a NN that does that.
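One way to see the training difficulty: a hard step has zero derivative everywhere except at the threshold, so gradient-based training gets no signal through it (people do work around this with surrogate gradients and the like, but it isn't the plain backprop recipe). A quick numerical check:

    import numpy as np

    def step(x):
        # Non-continuous, all-or-nothing activation.
        return (x > 0).astype(float)

    x = np.array([-1.0, -0.3, 0.4, 1.0])    # points away from the threshold
    eps = 1e-6
    numerical_grad = (step(x + eps) - step(x - eps)) / (2 * eps)
    print(numerical_grad)                    # [0. 0. 0. 0.] -> nothing to backpropagate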
Sure, a real neuron activates by outputting a train of spikes after some input threshold has been crossed (a complex matter of synapse operation - not just a summation of inputs), while in ANNs we use "continuous" activation functions like ReLU... But note that the output of a ReLU, while continuous, is basically on or off, equivalent to a real neuron having crossed its activation threshold or not.
If you really wanted to train artificial spiking neural networks in biologically plausible fashion then you'd first need to discover/guess what that learning algorithm is, which is something that has escaped us so far. Hebbian "fire together, wire together" may be part of it, but we certainly don't have the full picture.
OTOH, it's not yet apparent whether an ANN design that more closely follows real neurons has any benefit in terms of overall function, although an async dataflow design would be a lot more efficient in terms of power usage.
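For what "fire together, wire together" looks like in its plainest form (a toy sketch; real synaptic plasticity is far richer, which is the "not the full picture" part):

    import numpy as np

    # Plain Hebbian update: strengthen a synapse whenever its presynaptic input and
    # the postsynaptic neuron are active together. There is no decay/depression term
    # here, so weights can only grow - one reason plain Hebb can't be the whole story.
    rng = np.random.default_rng(1)
    w = rng.normal(scale=0.1, size=4)        # synaptic weights onto one neuron
    eta = 0.01                               # learning rate
    for _ in range(100):
        pre = (rng.random(4) < 0.3).astype(float)   # presynaptic spikes (0/1)
        post = float(pre @ w > 0.2)                 # postsynaptic firing (thresholded)
        w += eta * post * pre                       # update only co-active synapses
    print(w)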
If you account for the fact that biological neurons operate at a much lower frequency than silicon processors, then the raw performance gets much closer. From what I can find, the neuron membrane time constant is around 10 ms [1], meaning 10 billion neurons could manage about 1 trillion activations per second, which is in the realm of modern hardware.
People mentioned in [2] have done the calculations from a more informed position than I have, and reach numbers like 10^17 FLOPS when doing a calculation that resembles this one.
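Spelling the arithmetic out:

    neurons = 10e9                    # 10 billion neurons (the figure used above)
    time_constant_s = 10e-3           # ~10 ms membrane time constant [1]
    max_rate_hz = 1 / time_constant_s             # ~100 activations per neuron per second
    print(f"{neurons * max_rate_hz:.0e}")         # 1e+12 activations/s, i.e. ~1 trillion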