So, a mishmash of ideas from other papers (not to downplay the results). These are exciting times of hackery: basically taking puzzle pieces and piecing stuff together.
This is the kind of stuff that can only be done so quickly by bringing more and more people into the field to try these ideas out. The more people, the more permutations of ideas.
I assume he means that the encrypted metadata in HTTP/3 / QUIC makes it harder for a security admin to "peek" at what is going on in the network.
In my opinion it's short-sighted, because if we care about security, then we should care about user security and privacy as well. If the security admin has the ability to inspect packets, so does a potentially malicious app.
SSH3 is a complete revisit of the SSH protocol, mapping its semantics on top of the HTTP mechanisms. In a nutshell, SSH3 uses QUIC+TLS1.3 for secure channel establishment and the HTTP Authorization mechanisms for user authentication.
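To make "HTTP Authorization mechanisms" concrete, here's a minimal sketch of the simplest such scheme, HTTP Basic auth, where the credentials are base64-encoded into a request header. This is only an illustration of the general mechanism, not a claim about which scheme SSH3 actually uses:

```python
import base64

def basic_auth_header(user: str, password: str) -> str:
    # HTTP Basic auth: base64-encode "user:password" and prefix the scheme name.
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Authorization: Basic {token}"

# Hypothetical credentials for illustration only
print(basic_auth_header("alice", "s3cret"))
# Authorization: Basic YWxpY2U6czNjcmV0
```

Because the header travels inside the QUIC+TLS 1.3 channel, it is encrypted on the wire like any other HTTP/3 header.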
So, it has nothing to do with SSH2; it's more about HTTP/3-QUIC security theater: the hostname is still sent in the clear during TLS 1.3 negotiation (SNI).
To be clear, my reading of the parent post is that the grandparent doesn't like HTTP/3-QUIC making it harder to read data off of the wire (i.e., for internal security analytics).
But I don't see how this is worse than SSHv2. In both cases retrieving the hostname / IP is trivial: you just instrument DNS for the hostname, and of course the IP is cleartext.
For context, Terence Tao is often referred to as one of the great mathematicians of our time.
He also has a Mastodon account where he sometimes goes over the implications of LLMs, and this post contains his musings on their current potential and possible impact on the near future of mathematics.
"The 2023-level AI can already generate suggestive hints and promising leads to a working mathematician and participate actively in the decision-making process. When integrated with tools such as formal proof verifiers, internet search, and symbolic math packages, I expect, say, 2026-level AI, when used properly, will be a trustworthy co-author in mathematical research, and in many other fields as well."
Thanks, I just hope this is sustainable. I like newsletters and the site design is simple and basic.
I hope there's some way you can set up a monthly contribution of, say, $1-$5 to help pay for some of the hosting/API costs.
The main concern for me, though, is the long-term vision for the site. I'd really like it to stay user/community oriented, and hopefully the users can cover the costs with donations/contributions.
Thanks - I appreciate the concern about the costs of running the site, but they're minimal for now and I'm more than happy to cover them. I might eventually try to monetize the site and upcoming newsletter, but there are better ways to do it than charging readers.
It seems to me that they heard a suggestion that the site be made available only to people who pay, while the suggester was instead proposing a donation / "buy me a beer" link.
Indeed, there are ways to monetize such a thing that exclude both charging all of your users and selling advertisement space.
AI can efficiently teach your daughter how to solve math problems, but as a human, you can provide her with the context and understanding of why and what she's learning. AI is great at handling the technical aspects, freeing us to focus on cultivating our uniquely human qualities of empathy, creativity, and critical thinking. Letting the AI teach the how and the human teach the why and what can lead to a more holistic and enriching education.
I'd like to agree, but it does seem like AI is rapidly getting better at providing the context and the "why" too (as well as creativity and critical thinking).
the language models can tell you how to do a fourier transform, and then you can have a conversation about what it all means, why it's important, the discovery, etc
[me] why are fourier transforms relevant to learn? what is the history behind them
[bing] Fourier transforms are relevant to learn because they are used in many fields such as signal processing, physics, and engineering. They are used to convert a function from the time domain to the frequency domain, which can help to identify the frequencies present in a signal and in what proportions. The Fourier transform was first introduced by Joseph Fourier in 1822 as a way to solve the heat equation. Would you like more information on the history of Fourier transforms or its applications?
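The "identify the frequencies present in a signal" part of that answer is easy to demonstrate in a few lines. A minimal sketch using NumPy's FFT (the signal and the 50 Hz / 120 Hz components are made up for illustration):

```python
import numpy as np

# Sample a 1-second signal at 1000 Hz containing two sine components
fs = 1000
t = np.arange(fs) / fs
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

# The FFT converts the time-domain samples into the frequency domain
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)

# The two largest peaks land at the component frequencies
peaks = freqs[np.argsort(spectrum)[-2:]]
print(sorted(peaks))  # [50.0, 120.0]
```

The spectrum's peak heights also reflect the components' proportions: the 50 Hz peak is roughly twice the 120 Hz one, matching the amplitudes in the input.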
Me, with not much background in programming, not knowing what RoR stood for.
Me asking ChatGPT:
In this context, RoR most likely refers to Ruby on Rails, which is a web application framework written in the Ruby programming language. The second comment is suggesting that people had similar dismissive comments about Ruby on Rails (RoR) when it was first introduced, but it has since become a widely used and respected framework in web development. The comment implies that similar trends may apply to AI tools used for web design.
>>A dog can learn. A cat can learn. And they don't learn like a human would.
Yes, but they learn a LOT MORE like a human would than these machine models do. Cats & dogs share the same underlying structure, from the neuron/synapse/neurotransmitter system up to the brainstem/cerebellum/midbrain/cerebrum architecture, as well as being inextricably integrated into a living body, sensory system, and growth pattern.
And, as you say, there are big differences in how we all learn. But those differences are utterly trivial compared to the differences between humans and ML.