This figure is sort of an overclaim imho. If you look inside the paper, the reported figure is actually 97 FPS (vs 135 FPS for 3DGS on their device). The 2400 FPS they advertise is for a degraded version that completely ignores transparency... but transparency is both what lets these representations support interesting volumetric effects and what makes rendering challenging (because it requires sorting things). Drawing 1M triangles at 2400 FPS on their hardware is probably just quite normal.
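For context on why ignoring transparency changes the game: alpha blending is order-dependent, so a transparent renderer has to depth-sort and composite its primitives every frame, while opaque triangles just go through the z-buffer. A rough Python/NumPy sketch of that per-frame work (sizes and values are made up for illustration, not taken from the paper):

```python
import numpy as np

# Rough sketch of why order-dependent transparency is the expensive part:
# every frame the primitives must be depth-sorted before compositing,
# whereas opaque geometry is resolved by the z-buffer with no global sort.
rng = np.random.default_rng(0)
n = 1_000_000
depth = rng.uniform(0.1, 100.0, size=n)        # view-space depth per primitive
color = rng.uniform(0.0, 1.0, size=(n, 3))     # per-primitive RGB
alpha = rng.uniform(0.05, 0.9, size=n)         # per-primitive opacity

order = np.argsort(depth)[::-1]                # farthest first, redone every frame

# Back-to-front "over" compositing (a single pixel's worth, just to show the idea).
out = np.zeros(3)
for i in order[:64]:
    out = alpha[i] * color[i] + (1.0 - alpha[i]) * out
```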
The puzzle assumes that the room temperature is greater than the cold milk's temperature. When I added that the room temperature is, say, -10 °C, Mercury failed to see the difference.
Under any reasonable assumptions for the size and shape of the cup, the amount of coffee, the makeup of the air, etc., the room being -10 °C won't change the result.
It would only matter if the air were able to cool the coffee to a temperature less than that of the milk in under 2 minutes.
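A quick back-of-the-envelope check with Newton's law of cooling shows why (the cooling constant and temperatures below are assumed round numbers, not figures from the puzzle):

```python
import math

# Newton's law of cooling: T(t) = T_room + (T0 - T_room) * exp(-k * t)
# k is assumed: a mug of ~90 °C coffee in a ~20 °C room loses very roughly
# 1 °C per minute, which corresponds to k ≈ 1/70 ≈ 0.014 per minute.
def coffee_temp(t_min, t0=90.0, t_room=-10.0, k=0.014):
    return t_room + (t0 - t_room) * math.exp(-k * t_min)

t_milk = 4.0             # fridge-cold milk (assumed)
print(coffee_temp(2.0))  # ≈ 87 °C after 2 minutes: nowhere near dropping below 4 °C
```

Even if the assumed k is off by a factor of several, the coffee still ends the two minutes far above the milk's temperature, so the -10 °C room doesn't change the answer.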
Installing it in Edge makes Edge freak the fuck out about your default search engine being changed. It tried three times to force me to change my search engine back, once even saying it had already changed it for me to protect me.
Ctrl+F'd for Perplexity. I knew Google was cooked the minute Perplexity worked better for questions about an obscure embedded systems SDK. It has little documentation, but a lot of mailing list threads and GitHub issues. Google spits out the front page of the project and shrugs; Perplexity actually answers the question. The usual caveats about LLM hallucination apply.
Same. Try the "books on the Battle of Midway" query on Perplexity. The results are great and include the book mentioned in the article (authored by the Naval Aviator).
The difference in the dates example seems right to me.
20 October 2024 and 2024-20-10 are not the same.
Dates in different locales can be written as yyyy-MM-dd, and read that way 2024-20-10 isn't even a valid date; it could just as well be a catalog/reference number. So it seems right that their embedding similarity is not perfectly aligned.
So it's not a tokenizer problem: the text genuinely meant different things to the LLM.
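To make that concrete, a quick check with Python's standard datetime (just an illustration of the ambiguity; obviously the embedding model isn't calling strptime):

```python
from datetime import datetime

print(datetime.strptime("20 October 2024", "%d %B %Y").date())  # 2024-10-20
print(datetime.strptime("2024-10-20", "%Y-%m-%d").date())       # same day, ISO order

try:
    datetime.strptime("2024-20-10", "%Y-%m-%d")
except ValueError:
    # There is no month 20, so read as yyyy-MM-dd this string isn't a date at all.
    print("2024-20-10 does not parse as yyyy-MM-dd")
```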
If you devote a couple hours a day to it you can acquire significant proficiency, just like with a classroom setting. But just like with a live course, merely participating and doing the homework will not be enough to achieve conversational proficiency. You need to do a lot of off-curriculum study/practice with a lot of different resources (some of them ideally real people with high linguistic competence).
Edit: I say this from a theoretical vantage as someone who studied applied linguistics in undergrad, and from an anecdotal perspective as someone who did enough Duolingo over the span of a couple months to start reading news articles in Spanish with little need to consult translators/dictionaries, but who still couldn’t navigate their way around a Latin American city. It took me a couple of weeks in Latin America to begin to communicate fluidly with locals (and more time still to become conversational), despite the strong syntactic and broad semantic base that heavy Duo use afforded me.
I find it works better for languages with simple grammar rules, like Swedish/Danish/Norwegian.
I was able to jump from Duolingo straight to young adult books, and became conversational from there.
For languages with complicated grammar rules, you don’t get the support you need to build a grammar foundation; it’s only really helpful for learning vocabulary, not necessarily for learning how to form sentences.
You need multiple sources of learning rather than relying on Duolingo alone.
I don’t know anyone who got good solely from Duolingo, but it could be a reasonable supplementary tool alongside others that are more effective for specific goals; I got a good start with Pimsleur (which focuses on speaking and listening), and Duolingo just helped add more vocabulary and some grammatical instruction alongside it.
I stopped using Duo almost a year ago after reaching a 2,500-day streak, having turned the whole tree (it was a tree, not a path, when I started) gold more than once, only for most of the lessons to be reset as more content was added.
I'd decided to stop using it before that point, and only continued in order to reach a round number.
It kept giving me the illusion of knowledge, but when it came to actually trying to talk to native speakers I was getting perhaps three words in four, which isn't enough for any but the most basic of sentences.
My current apps of choice are Babbel and Clozemaster, the latter of which is perfect for commutes precisely because it doesn't use any stupid annoying animations or grating children's voices like Duolingo does, and therefore allows me to get into flow state.
I'm now up to recognising 29 words out of 30 in normal conversation (depending on accent and speed, of course), but wouldn't have gotten that far if I was still on Duo.
I tried Duolingo, but got better results with an audio course [0] (only a few PDFs were provided for grammar). It works better for me because we converse with ideas and intents, not isolated concepts. I learned how to articulate first, then picked up vocabulary as I went (books, movies, real-world situations).
GitHub Copilot definitely improved my coding productivity. However, most of my time is spent thinking about the problem to be solved rather than on the actual coding.