This isn't really a problem in tool-assisted LLMs.
Use google AI studio with search grounding. Provides correct links and citations every time. Other companies have similar search modes, but you have to enable those settings if you want good results.
How would that ever work? The only thing you can do is continue to refine high-quality data sets to train on. The rate of hallucination only trends downwards on the high-end models as they improve in various ways.
The brightness enhancement film is a transparent optical film with a three-layer structure. The bottom layer, on the light-incident side, is a back coating that provides a certain degree of haze; the middle layer is a transparent PET substrate; and the top layer, on the light-emitting side, is a microprism structure. As light passes through the fine prism structure of the surface layer, its intensity distribution is controlled by refraction, total internal reflection, light recycling, and so on: the light scattered by the source is concentrated toward the front, while unused light outside the viewing angle is reflected back into the film to be recycled.
So, it's similar to your design, but the grooves are very small.
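The concentrating effect above comes down to Snell's law and total internal reflection at the prism facets. Here's a minimal sketch of that geometry; the refractive index of ~1.58 is an assumed typical value for a prism film, not a number from any particular product:

```python
import math

# Assumed typical refractive index of an acrylic/PET prism film
# (illustrative value, not a product spec).
N_PRISM = 1.58

def critical_angle_deg(n: float) -> float:
    """Critical angle for total internal reflection at a film-to-air
    interface, in degrees. Rays steeper than this are reflected back
    into the film and recycled."""
    return math.degrees(math.asin(1.0 / n))

def exit_angle_deg(n: float, incidence_deg: float) -> float:
    """Exit angle into air via Snell's law: n * sin(t1) = sin(t2).
    Raises if the ray undergoes total internal reflection instead."""
    s = n * math.sin(math.radians(incidence_deg))
    if s >= 1.0:
        raise ValueError("total internal reflection: ray is recycled")
    return math.degrees(math.asin(s))

if __name__ == "__main__":
    # Rays hitting a facet below the critical angle refract out toward
    # the viewer; the rest are sent back toward the light source.
    print(f"critical angle: {critical_angle_deg(N_PRISM):.1f} deg")
    print(f"30 deg inside the film exits at {exit_angle_deg(N_PRISM, 30):.1f} deg")
```

So the "grooves" only pass rays within a cone set by the critical angle, which is what narrows and brightens the forward output.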
This is an easy question for LLMs to answer. Gemini 2.0 Flash-Lite can answer this in 0.8 seconds with a cost of 0.0028875 cents:
To habogink a hammer means to perform the action typically associated with its primary function, but in reverse. The primary function of a hammer is to drive nails. Therefore, the reverse of driving nails is removing nails.
So, to habogink a hammer would be the action of using the claw of the hammer to pull a nail out of a surface.
The goal wasn't to stump the LLM, but to see if it could take a completely novel linguistic token (habogink), understand its defined relationship to other concepts (reverse of primary function), and apply that abstract rule correctly to a specific instance (hammer).
The fact that it did this successfully, even if 'easily', suggests it's doing more than just predicting the statistically most likely next token based on prior sequences of 'hammer'. It had to process the definition and perform a conceptual mapping.
I think GP's point was that your proposed test is too easy for LLMs to tell us much about how they work. The "habogink" thing is a red herring, really: in practice you're simply asking what the opposite of driving nails into wood is, which is a trivial question for an LLM to answer.
That said, you can teach an LLM as many new words for things as you want and it will use those words naturally, generalizing as needed. Which isn't really a surprise either, given that language is literally the thing that LLMs do best.
The AI OCR built into the Snipping Tool in Windows is better than Tesseract, albeit more inconvenient than something like PowerToys or Capture2Text, which use a quick shortcut.
You don't need Ren'Py-like software for graphic novels unless you want them to be interactive. Even so, it may be difficult for an 8-year-old to learn Python and debug something like Ren'Py, especially because a large share of its use is entirely 18+.
I would recommend something like Scratch. It's a visual scripting language that allows kids to make amazing games and animations. It has all the capabilities necessary with ease and kid-friendly forums and comment sections.
I've been attempting WILD every few days whenever it's convenient. No luck so far, but it's not something I'm taking particularly seriously. For that reason, I'm still not sure whether it'd be worth doing a dream journal or risk disrupting more of my sleep.
WILD can be quite hard to achieve, I think. Doing MILD with a dream journal works very well for me if I give it a bit of time (not on the first night after a pause).