
A lot of people are saying the business model doesn't justify a $1bn valuation (rightfully so), but I'm guessing the valuation wasn't based on their current business, but on the possibility that Cameo would become the new way of booking talent in the age of the internet.

They could have become a $1bn business if they had "revolutionized talent management" (or something like that). Not saying it was a good investment, or one I would have made, but I'm guessing they pitched a larger vision than simply a buttload of cameos from washed-up/reality TV stars.


To be fair, I also didn’t include the session layer!

Writing isn't a strength of mine, so I appreciate the criticism. My writing going from "bad" -> "is it AI?" is progress.

I struggled with where to cut off the explanation, and public key cryptography seemed like a good boundary since it's better explained elsewhere, as are the various OSI layers.

I probably should have gone over the cert and potentially the full chain of trust, I’ll give you that.


While I agree no one is rewriting history, it is potentially a big deal because it speaks to the biases present during training/RLHF. Considering this will be used by millions (if not tens of millions) of people, calling it a "silly toy" feels off.

Bias in the model can lead to bad outcomes in certain situations (hint: we have an election coming up).

Yes, this is innocuous, but it does hint at the possibility of more damaging bias.


> We provide evidence for the Reversal Curse by finetuning GPT-3 and Llama-1 on fictitious statements such as "Uriah Hawthorne is the composer of 'Abyssal Melodies'" and showing that they fail to correctly answer "Who composed 'Abyssal Melodies?'". The Reversal Curse is robust across model sizes and model families and is not alleviated by data augmentation.

This just proves that the LLMs available to them, with the training and augmentation methods they employed, weren't able to generalize. It doesn't prove that future LLMs, or novel training and augmentation techniques, will be unable to generalize.


No; if you read the article linked below, it shows there were some issues with the way they tested.

> The claim that GPT-4 can’t make B to A generalizations is false. And not what the authors were claiming. They were talking about these kinds of generalizations from pre and post training.

> When you divide data into prompt and completion pairs and the completions never reference the prompts or even hint at it, you've successfully trained a prompt completion A is B model but not one that will readily go from B is A. LLMs trained on "A is B" fail to learn "B is A" when the training data is split into prompt and completion pairs.

Simple fix: train on the prompt and completion together, computing gradients not just for the completion but for the prompt as well. Or make sure the model trains on data going in both directions by augmenting it before pre-training. (Rough sketch of both below.)

https://andrewmayne.com/2023/11/14/is-the-reversal-curse-rea...
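
To make that concrete, here's a rough Python sketch of both ideas (the fact string is the paper's fictitious example; the -100 label masking follows the usual causal-LM fine-tuning convention, so treat this as an illustration, not anyone's actual training code):

    import torch

    # Idea 1: augment each "A is B" fact with its reversal so the model
    # sees both directions during (pre-)training.
    facts = [("Uriah Hawthorne", "the composer of 'Abyssal Melodies'")]

    def augmented(facts):
        for a, b in facts:
            yield f"{a} is {b}."                    # A is B
            yield f"{b[0].upper()}{b[1:]} is {a}."  # B is A

    # Idea 2: when training on (prompt, completion) pairs, keep the loss
    # on the prompt tokens too, instead of masking them out with -100
    # (the usual "completion-only" loss in fine-tuning scripts).
    def build_labels(input_ids: torch.Tensor, prompt_len: int,
                     mask_prompt: bool = False) -> torch.Tensor:
        labels = input_ids.clone()
        if mask_prompt:
            labels[:prompt_len] = -100  # ignored by cross-entropy
        return labels  # full-sequence loss gives gradients on the prompt too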


*Shameless Plug*

If you want to play around with OpenJourney (or any other fine-tuned Stable Diffusion model), I made my own UI with a free tier at https://happyaccidents.ai/.

It supports all open-source fine-tuned models & LoRAs, and I recently added ControlNet.


I no longer work at Brex, but I believe they did this using Kotlin rather than Elixir.


Thanks for the insight. Either way, it would be nice of them to either open source their solution or, if they're using someone else's, point us to which one they picked.


Which headphones did you try? That sounds amazing.


Can't speak for OP, but I recently bought a pair of Sony WH-1000XM4 headphones and I'm pretty impressed. They don't make me feel the pressure in my ear like some do, and the noise suppression is just about magic. My wife has to text me from downstairs to tell me my kids are fighting even when it's just outside the door to my office, because I can't hear them at all (and I don't turn up the volume on my music). Just turning them on without any music makes it hard to understand someone talking in the same room.


I have these same ones, and the AirPods Max as well. Both are really amazing for focus, deep thought, and traveling.


Which do you prefer? I’ve been rocking the Bose QC35 II’s for years but am wondering if in-ear solutions can compete now for commuting on foot (metro, buses, etc.)


I loved my XM3. They eventually broke, and I replaced them with the AirPods Max, as I'm all-in on Apple gear. Returned those within a week. The weight, the size, and the ANC side effects made them unusable for all-day wear. Some people love them, but I was disappointed. I'm back to the XMs, with newfound appreciation.

You can try both for a week, and return the set that doesn’t fit into your life. Both Apple and Sony can handle a refund.


> traveling

For sure: that moment when you power them up on the airplane is heavenly.


What's the magnitude of improvement over the standard option (Bose QuietComfort)?


XM3 here; my understanding is that the noise-cancelling performance is similar to the XM4's.


AirPods Max noise cancellation is pretty stellar.


This looks cool! What would be the biggest motivation for using this over something like slateJS?


I tried to use Slate.js initially, but I found the project to be slow (it was using ImmutableJS back then), and even though it claims to support collaboration, it doesn't actually, which was a deal breaker for me.


Slate.js, even after the change to Immer, is slow. IMHO (as someone who actively observes the development and sometimes participates in their Slack), performance is not taken seriously in this project. In the last few months they landed a few PRs that improved some cases while breaking others. I'm surprised how many projects use it [0], because it has trouble handling editing and pasting of huge documents. I also see many PRs from the community focused on optimization, but they are ignored, stalled, or prematurely closed. It also doesn't handle IME properly, which is a major problem for many languages. However, the maintainers have started to be more active, so all the problems I've mentioned might be fixed soon.

[0] For example Kitemaker https://blog.kitemaker.co/building-a-rich-text-editor-in-rea... (AFAIK they use v0.47).

[1] Edit: the TinyMCE team is building their editor on Slate: https://www.tiny.cloud/blog/real-time-collaborative-editing-...


Slate.js is React-only IIRC, although I believe someone made a draft PR to make it work outside of React.


There is a fork (rewrite) in Vue.js [0].

[0]: https://github.com/marsprince/slate-vue


I think this is one of those things where people overestimated how much things would change in the short term, but will grossly underestimate how much they'll change in the long term.

5 years is a pretty short window in the grand scheme of things when talking about the adoption of technologies.


Gold has been in use for hundreds of years, and yet it's still very volatile. If Bitcoin is digital gold, that doesn't really solve the volatility problem.


I don't think people are arguing Bitcoin will be less volatile than gold; BTC is currently still an order of magnitude more volatile than gold.

Point being that even with gold's volatility, it is still seen by pretty much everyone as a viable store of value, and I think BTC will be similar.
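
For anyone curious how that "order of magnitude" comparison is usually made: annualize the standard deviation of daily log returns. A toy Python sketch (the prices below are made up, not real market data):

    import numpy as np

    def annualized_vol(prices, periods_per_year=252):
        # daily log returns, stdev scaled to a yearly figure
        # (252 trading days for gold; crypto trades ~365 days/year)
        returns = np.diff(np.log(np.asarray(prices, dtype=float)))
        return returns.std() * np.sqrt(periods_per_year)

    gold = [1800.0, 1805.0, 1798.0, 1810.0, 1807.0]      # made-up prices
    btc = [30000.0, 31500.0, 29000.0, 32000.0, 30500.0]  # made-up prices
    print(annualized_vol(gold), annualized_vol(btc, periods_per_year=365))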


Without divulging any trade secrets, are there any research papers or topics you would recommend learning more about? I'm really interested in learning more about these FSO improvements.

