
How can I fine-tune these models for my use? Their docs aren't clear on whether the Gemini models are fine-tunable.


Oh yes, just burn all the money so they can use it as a tax write-off?

Does anyone really know how these tax write-offs work? People just use the term without any clear understanding.

Meta, like any sane company out there, will probably use every rule to avoid (not evade) paying tax. That's good for the shareholders. But this isn't one such measure; it's mostly R&D expenditure.


It works as a tax-free investment. So long as it actually pays off in the future, they get to skip tax by putting it straight into R&D now instead of tomorrow. Another thing they did was sell hardware below cost, so the public kind of benefits - but at the same time FB potentially benefits by undercutting competition and gaining market share... One way of framing that last one might be "deciding how their tax is spent so that it might benefit Facebook" - from that perspective it doesn't seem like a bad idea (purely from the business side, not the public interest) to burn the tax yourself for a little more control.


You gotta burn money to make money. This much cash burn is required to build the next-gen level of infrastructure. It's a big bet for the company; let's see how it pays off. Haters are going to say it's all in vain, but no one knows how the future will look. Stock at ATH, btw.


> "Note: Meta began reporting Reality Labs as its own segment in Q4 2020."

RL has been burning money at similar rates for a few years now; this quarter just happens to be the highest so far. Q4 always seems to stand out as the largest loss each year (according to the "Quarterly losses for Meta's Reality Labs" chart in the article).


How about we tax you at 90% and let the money go into the pockets of useless administrators instead of having you spend it in your own self-interest?


Not sure if others have experienced this, but you tend to forget nouns (object names, names of people, places, etc.) after getting hit with COVID. I googled this and something called "anomic aphasia" came up.

I often caught myself saying "that thing" because I couldn't recall the names of even basic objects or street names after getting hit with Omicron.

Just curious if others have experienced this as well.


Sounds like having children. My mum could never get our names right and I could never understand why. Had children, now I do exactly the same thing. Still don't really understand why though tbh.


Thanks for the tip. Predibase has support for Zephyr-7B, but I wonder if they offer the same price per 1k tokens for a fine-tuned version of Zephyr-7B. Most likely they will ask me to get a dedicated instance for that, which is the same as together.ai.


Just checked out mystic.ai; it looks like you only pay for usage on any model, not for idle time. Might actually fit my requirements.


Characters in a book don't write their own dialogue; they are merely projections of the author. But AI chatbots are non-deterministic and come up with dialogue of their own. Of course, you can say they are trained on a large corpus of text and all they are doing is predicting the next token in the series. And yet, why do they seem to have distinct personalities? The prediction algorithm works in a way that creates a sense of personality. How is this happening?


> "But AI chatbots are non-deterministic"

I think you need to re-math that... They're entirely and totally deterministic.

You and I can spin up the same model, seed it with the same seed and they'll produce exactly the same nonsense to the same prompts, every single time.
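
For the curious, here's a minimal sketch of that claim. It assumes the Hugging Face transformers library and a small public model like gpt2 (my choices for illustration, not anything mentioned in the thread): fix the sampling seed and the same prompt gives the same output on every run.

  import torch
  from transformers import AutoModelForCausalLM, AutoTokenizer

  # Illustrative model choice; any causal LM would demonstrate the same point.
  tok = AutoTokenizer.from_pretrained("gpt2")
  model = AutoModelForCausalLM.from_pretrained("gpt2")
  prompt = tok("The chatbot said", return_tensors="pt")

  def generate(seed: int) -> str:
      torch.manual_seed(seed)      # fix the RNG that sampling draws from
      out = model.generate(
          **prompt,
          do_sample=True,          # sampling is on, so the RNG is actually used
          temperature=1.0,
          max_new_tokens=20,
      )
      return tok.decode(out[0])

  # Same seed, same prompt -> identical output every time.
  assert generate(42) == generate(42)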


They have a seed, but lots of randomness is injected, and that makes them non-deterministic by definition.

Certain applications have different levels of randomness, but the chatbots are usually put on higher levels of randomness, especially the entertainment ones.

You can make something that produces the same output every time. This is more commonly used for code autocompletes. So if you try to chat with Copilot, you'll likely get something more deterministic.
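
Roughly, the "level of randomness" here is the sampling temperature. A toy sketch of the idea (the vocabulary and scores below are made up, not taken from any real chatbot): at temperature 0 you always take the top-scoring token, while higher temperatures flatten the distribution so other tokens get sampled too.

  import numpy as np

  rng = np.random.default_rng()
  vocab = ["cat", "dog", "bird", "fish"]
  logits = np.array([3.0, 2.5, 1.0, 0.2])     # hypothetical model scores for the next token

  def sample_next(temperature: float) -> str:
      if temperature == 0.0:
          return vocab[int(np.argmax(logits))]  # greedy: always the top token
      probs = np.exp(logits / temperature)      # softmax with temperature
      probs /= probs.sum()
      return vocab[int(rng.choice(len(vocab), p=probs))]

  print(sample_next(0.0))   # deterministic, like a code-autocomplete setting
  print(sample_next(1.2))   # higher temperature: flatter distribution, more variety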

ChatGPT is probably toned down more than most, to keep it from going off the rails. It is a bit sad that the AI most people have access to is rather "caged". The "jailbreak" techniques actually show the AI closer to what it's really like. Saying ChatGPT is deterministic is similar to saying a human in a call center is deterministic; they're following an SOP, but that's not what they're really like.


Can you expand on what further randomness is supposedly injected? Every single AI system I have used has supported seeding, and seeding made every single one deterministic. I honestly don't know how you're supposed to have "randomness" in AI models if the RNG is seeded.

It's a different story if you use specific libraries like Xformers, but those introduce randomness as an artifact of their optimizations, not due to any magical non-seeded randomness.


I wouldn't waste my time, personally. Most God in the Gaps arguments trace back to intellectual cowards, in my experience.


It's entirely deterministic. "chatgpt" is not an entity; it's a process. It doesn't even exist in a temporal sense, and the sentences it produces aren't thoughts; it's a systematic technique for producing voluble text.


The personality you say they have happens in the head of the recipient, just like with a book.

This also happens with people, animals, and even objects and abstract shapes (which is friendlier, the spiky triangle or the rounded one?)... humans ascribe personality to all kinds of things. It's a baked-in feature.


Hmm. I think this guy is just looking for a payday. He was a contractor, not a full-time employee.


Thanks for the link. Not sure why Hacker News doesn't want to hear the truth.


From the article: "An experiment that created a hybrid version of a bat coronavirus — one related to the virus that causes SARS (severe acute respiratory syndrome) — has triggered renewed debate over whether engineering lab variants of viruses with possible pandemic potential is worth the risks..."

