A lot of people are saying the business model doesn't justify a $1bn valuation (rightfully so), but I'm guessing the valuation wasn't based on their current business, but on the possibility that Cameo would become the new way of booking talent in the age of the internet.
They could have become a $1bn business if they had "revolutionized talent management" (or something like that). Not saying it was a good investment, or one I would have made, but I'm guessing they pitched a larger vision than simply a buttload of cameos from washed-up/reality TV stars.
To be fair, I also didn’t include the session layer!
Writing isn’t a strength of mine, so I appreciate the criticism. My writing going from “bad” -> “is it AI?” is progress.
I struggled with where to “cut off” the explanation; public key cryptography seemed like a good boundary that is better explained elsewhere, as did the various OSI layers.
I probably should have gone over the cert and potentially the full chain of trust, I’ll give you that.
While I agree no one is rewriting history, it is potentially a big deal because it speaks to the biases present when training/RLHF-ing. Considering this will be used by millions (if not tens of millions), calling it a “silly toy” feels off.
Bias in the model can lead to bad outcomes in certain situations (hint: we have an election coming up)
Yes, this is innocuous, but it does hint at the possibility of more damaging bias.
> We provide evidence for the Reversal Curse by finetuning GPT-3 and Llama-1 on fictitious statements such as "Uriah Hawthorne is the composer of 'Abyssal Melodies'" and showing that they fail to correctly answer "Who composed 'Abyssal Melodies?'". The Reversal Curse is robust across model sizes and model families and is not alleviated by data augmentation.
This just proves that the LLMs available to them, with the training and augmentation methods they employed, aren't able to generalize. It doesn't prove that future LLMs, or novel training and augmentation techniques, will be unable to generalize.
No, if you read this article it shows there were some issues with the way they tested.
> The claim that GPT-4 can’t make B to A generalizations is false. And not what the authors were claiming. They were talking about these kinds of generalizations from pre and post training.
> When you divide data into prompt and completion pairs and the completions never reference the prompts or even hint at it, you’ve successfully trained a prompt completion A is B model but not one that will readily go from B is A. LLMs trained on “A is B” fail to learn “B is A” when the training data is split into prompt and completion pairs.
Simple fix: put the prompt and completion together and compute gradients not just for the completion but also for the prompt. Or just make sure the model trains on data going in both directions by augmenting it before training.
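A rough sketch of what both fixes could look like, assuming the usual convention that loss-masked label positions are marked with -100 (as in common fine-tuning setups); the reversal template and function names here are illustrative, not from the paper:

```python
def reverse_fact(subject, relation, obj):
    """Data augmentation: emit the fact in both directions so the
    model sees 'B is A' as well as 'A is B' during training."""
    forward = f"{subject} {relation} {obj}"
    backward = f"{obj} was {relation} by {subject}"  # naive reversal template
    return [forward, backward]

def build_labels(prompt_ids, completion_ids, completion_only=True):
    """Completion-only training masks prompt tokens out of the loss
    with -100; full-sequence training keeps gradients on the prompt too."""
    if completion_only:
        return [-100] * len(prompt_ids) + list(completion_ids)
    return list(prompt_ids) + list(completion_ids)

# Augment the fictitious fact from the paper in both directions:
pairs = reverse_fact("Uriah Hawthorne", "composed", "Abyssal Melodies")

# Full-sequence labels keep every token in the loss:
labels = build_labels([11, 12], [21, 22], completion_only=False)
```

With `completion_only=False` the prompt tokens stay in the loss, so the model also learns the "B" side of the statement rather than only predicting the completion given the prompt.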
If you want to play around with OpenJourney (or any other fine-tuned Stable Diffusion model), I made my own UI with a free tier at https://happyaccidents.ai/.
It supports all open-source fine-tuned models & LoRAs, and I recently added ControlNet.
Thanks for the insight. Either way, it would be nice of them to either open source their solution or, if they're using someone else's, point us to which one they picked.
Can't speak for OP, but I recently bought a pair of Sony WH-1000XM4 headphones and I'm pretty impressed. They don't make me feel the pressure in my ear like some do, and the noise suppression is just about magic. My wife has to text me from downstairs to tell me my kids are fighting even when it's just outside the door to my office, because I can't hear them at all (and I don't turn up the volume on my music). Just turning them on without any music makes it hard to understand someone talking in the same room.
Which do you prefer? I’ve been rocking the Bose QC35 II’s for years but am wondering if in-ear solutions can compete now for commuting on foot (metro, buses, etc.)
I loved my XM3s. They eventually broke, and I replaced them with the AirPods Max as I’m all-in on Apple gear. Returned those within a week. The weight, the size, and the ANC side effects made them unusable for all-day wear. Some people love them, but I was disappointed. I’m back to the XM line, with newfound appreciation.
You can try both for a week, and return the set that doesn’t fit into your life. Both Apple and Sony can handle a refund.
I tried to use Slate.js initially, but I found the project to be slow (it was using ImmutableJS back then), and even though it claims it can support collaboration, it doesn't actually support it, which is a deal breaker for me.
Slate.js, even after the change to Immer, is slow. IMHO (as a person who actively observes the development and sometimes participates in their Slack), performance is not taken seriously in this project. In the last few months they landed a few PRs that improved a few cases while breaking others. I am impressed by how many projects are using it [0], because it has problems handling editing and pasting of huge documents. I also see many PRs from the community focusing on optimization, but they are ignored, stalled, or prematurely closed. It also does not handle IME properly, which is a major problem for many languages. However, the maintainers have started to be more active, so all the problems I have mentioned might be fixed soon.
I think this is one of those things where people overestimated how much things would change in the short term, but will grossly underestimate how much they will change in the long term.
5 years is a pretty short window in the grand scheme of things when talking about the adoption of technologies.
Gold has been in use for thousands of years and yet it's still very volatile. If Bitcoin is digital gold, that doesn't really solve the volatility problem.
Without divulging any trade secrets, are there any research papers or topics you would recommend learning more about? I'm really interested in learning more about these FSO improvements.