OpenAI's new "Orion" model reportedly shows small gains over GPT-4 (the-decoder.com)
50 points by tiime 2 days ago | 81 comments





This doesn’t surprise me at all. I expect the focus will shift away from the models themselves, as they become fairly commoditised, towards the UX, interaction model, and new use cases.

It seems clear to me that most value will come from little bits of LLM integrated into other applications. Code autocomplete is a good example of this, it’s valuable because it’s a key press away in the workflow we’re already in. Email editing clearly works best in an email client. Generating an image for a slide deck is most useful in the slide authoring application… and so on.

There’s plenty of product discovery to be done and value to be realised in this space.


The marketing hype around ever-increasing improvements colliding with the reality of an apparent ceiling looks like it might spell mass disillusionment. Dialing back the AI enthusiasm seems like a good bet at this point.

Products are still going to increasingly capitalize on existing capabilities, like you say, but I wonder if the "AI everywhere" phenomenon might end up being a turn-off for a significant number of customers.


Apparently it's already harming sales: few if any of the reviews of the new iPhone even mention AI, even though Apple spent a lot of money marketing it, and if anything it seems to be hurting sales.

I wouldn't bet on a full-on AI winter soon, but it's starting to seem more and more likely the more hype companies try to build on it.


iPhone reviews don’t mention AI because: 1. The features didn’t exist in iOS 18.0 at release. You can’t review what doesn’t exist. 2. They still don’t exist in the majority of the world in iOS 18.1. If you are not in the US or UK, Apple Intelligence just doesn’t exist

> If you are not in the US or UK, Apple Intelligence just doesn’t exist

Technically, only the US is supported. If you are British and you try to enable it, you’ll be told that it’s not available in your language. To get it to work, you need to switch your device language from English (UK) to English (US).

Apple says “English (Australia, Canada, Ireland, New Zealand, South Africa, UK) language support available this December.”

Since access is controlled by your language setting and not your location, I don’t think this is a regulatory thing. No idea why it takes Apple months to localise from American English to any other type of English.


Checked, and it's true. I rarely see or hear about iPhones, so I just heard they sold badly and made a connection.

Though why was that fact never mentioned in the adverts? Or did I just not notice?


I just had a look, and though I can't use my iPhone 13, apparently the M1 MacBook should be OK, so I'm trying to update the OS to 15.1 and give it a go. I can't say the features seem super exciting though:

Writing Tools

Clean Up in Photos

Create a Memory film in Photos

Natural language search in Photos

Reduce Interruptions Focus

...

I mean, it's nice, but it's mostly stuff that already seems to be out there on the web. The automatic turning of photos into films is mostly something I have to spend time figuring out how to turn off. We shall see. Sticking my phone into 'focus mode' without telling me was one of the worst things Apple has ever done to me. I missed important calls because I expected my phone to work as a phone as usual.


Yeah, I have an iPhone 15 Plus and I am in Bangalore. I tried installing the update, changed the language and everything, but no Apple Intelligence on my phone.

Pretty sure most Apple Intelligence features aren’t released yet and the few that are were released like 2 weeks ago.

I agree, and I'm baffled by how little we invest in UX for AI. Most companies just put up a chat box and call it a day. Everyone seems to focus on gaining 0.1% on MMLU.

There's still a ton of room left in short error-correcting RLHF or fine-tune cycles.

I expect there will be, at some point, "trusted" or "verified" accounts that are able to flag a completion as wrong and provide a correct one, which will be collated and used (daily? hourly?) to fine-tune the weights. The active inference cluster would then be switched A/B/C etc.
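Purely as a sketch of how that loop might be wired up (every name and structure below is hypothetical; nothing here is a real OpenAI API): trusted reviewers queue corrections, a scheduled job fine-tunes a new checkpoint on them, and serving flips between clusters.

    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class Correction:
        prompt: str
        bad_completion: str
        fixed_completion: str
        reviewer: str            # id of a "trusted"/"verified" account

    @dataclass
    class Deployment:
        active: str = "A"        # which inference cluster is live
        weights: dict = field(default_factory=lambda: {"A": "weights-v1", "B": None})

    pending: List[Correction] = []

    def flag(prompt: str, bad: str, fixed: str, reviewer: str) -> None:
        # A trusted reviewer marks a completion as wrong and supplies a fix.
        pending.append(Correction(prompt, bad, fixed, reviewer))

    def fine_tune(base: str, batch: List[Tuple[str, str]]) -> str:
        # Placeholder: a real system would launch an actual fine-tuning job here.
        return f"{base}+ft({len(batch)} examples)"

    def daily_cycle(deploy: Deployment) -> None:
        # Collate the queued corrections, fine-tune, then A/B switch the live cluster.
        if not pending:
            return
        batch = [(c.prompt, c.fixed_completion) for c in pending]
        standby = "B" if deploy.active == "A" else "A"
        deploy.weights[standby] = fine_tune(deploy.weights[deploy.active], batch)
        deploy.active = standby
        pending.clear()

    dep = Deployment()
    flag("capital of Australia?", "Sydney", "Canberra", reviewer="verified-123")
    daily_cycle(dep)
    print(dep.active, dep.weights)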


Yeah even if it isn't AGI, I still feel call centers are practically doomed at this point, and given time, these models can and will replace those people.

I also happen to know a copywriter who said she was laid off because of ChatGPT.

It still is going to be a disruptive technology, though it is not perfect by any means.


Call centers have been doomed for some time now

So, OpenAI has created a whole new team very late in the process to explore methods of post-training and enhancing the output. Screams of desperation to me. It was supposed to launch on Azure perhaps as early as this month, and more widely in December. Now it is doing neither.

That's after immense resource consumption and cost from at least a year of training.

Is this when it happens? That we discover training with synthetic data only goes so far? We may not have the clear picture yet but the writing is appearing on the wall. In this case, the AI community may not have a clear path forward. This was supposed to be it.

For you investors' sake, I hope you are watching. It really feels like we're moving into the next phase for the time being, a more mature one where we seek applications of AI rather than its evolution. I could already feel it earlier this year, when the more exciting stuff was happening in small models for mobile devices.


> He said that the path to artificial general intelligence (AGI) is clear

Have they shifted the definition that much, or is there something they’re not telling? My impression is that, based on traditional definitions, no one has the foggiest notion of where to head to achieve it. And if he’s suggesting that “creative use of existing models” is a path to AGI… wow, that’s gotta be a useless definition of AGI.


“LLMs aren’t a path to AGI” is, like all theories about the progression of AI, at its core based on the philosophical musings of the subset of computer nerds that are unjustifiably confident enough to think that their area of expertise extends to something that truly nobody on the face of this earth is even close to understanding. OpenAI has just as much of an idea as the know-it-all HN naysayers, which is to say ‘no idea at all’.

How is "the path to AGI is clear" any better? At least the nay sayers aren't making any fake promises they wont be able to keep.

The definition is shifting but in the opposite direction.

If you'd showed one of these models to an AI researcher in the 90s or 2000s they would have been jumping up and down shouting "that's AGI!"

As we've been able to play with these models, we as humans can tell that there's something missing, a je ne sais quoi of human intelligence. They can think, they can reason, sometimes on very technical subjects, but the reasoning is shallow and fragile.

But to say that we are nowhere near AGI is definitely an exaggeration. We are on the precipice.


>> He said that the path to artificial general intelligence (AGI) is clear

> Have they shifted the definition that much, or is there something they’re not telling? My impression is that, based on traditional definitions, no one has the foggiest notion of where to head to achieve it. And if he’s suggesting that “creative use of existing models” is a path to AGI… wow, that’s gotta be a useless definition of AGI.

It's likely they're taking cues from Musk's "Full Self-Driving" fiasco: make big promises, then when progress stalls, redefine the promises to match what you've built.


> based on traditional definitions

A lot of traditional definitions imply some degree of consciousness or sentience.

But AGI will ultimately be defined by capabilities, not similarity to humans.

If a GPT-style LLM meets or exceeds humans at a broad category of tasks, it doesn't matter that it doesn't "understand" the tokens it operates on. Same with self-driving systems or autonomous robots. It doesn't matter if the CNN doesn't have a conception of a "person" - if it can feed the identification vector into the control network, and the control network swerves to avoid the identified object at a rate equal to humans, then that's good enough.

Capability is what matters, nothing else. And right now, we have ALL the parts to do this, we just have to scale them up, and train them better, and connect them.


My guess is that they changed it from our definition of AGI to their own:

Artificial.

Good enough.

Intelligence.

Besides, most people I know that use AI say they would get much more out of it with smaller, more specifically trained models than the all-knowing one we have now, but that depends.


AGI is already here today, unironically. The first ChatGPT easily qualifies.

https://www.noemamag.com/artificial-general-intelligence-is-...

What most people mean when they say AGI is ASI.


I disagree. ChatGPT is nowhere near AGI. I don't know anyone who confuses general intelligence with super intelligence.

How do you know what most people mean?

Update: I agree with the article that a reasonable amount of generality exists with ChatGPT. I dispute the intelligence part of it. It cannot evaluate the truth or falsity of what it spouts out. This appears to be a fundamental limitation of LLMs.


There are as many definitions as there are opinions, so I think people who rave on about “this guy thinks we’re closer to AGI than I think we are” don’t introduce anything interesting to the conversation.

Then again, we as engineers are always the annoying nerds who get stuck in the details.


They likely have a high level plan to get to AGI, since this would help to give their research direction.

I suspect stage 1 is to get current models to assist with making AI breakthroughs.


> They likely have a high level plan to get to AGI, since this would help to give their research direction.

They probably have ideas and a research direction, but it's probably going too far to call whatever they have a plan to get to AGI, since no one knows how to get there. At best you could probably say they have a plan to search for a path to AGI.

> I suspect stage 1 is to get current models to assist with making AI breakthroughs.

That sounds like a sci-fi trope.


It is just the kind of sci-fi trope that would get investors investing.

OpenAI barely has any moat. I seriously wonder why no one is talking about how close open-weight models are getting to the closed-source ones. They need investor cash to build something that would let them stand out, otherwise no one will bother paying for the OpenAI API when you can just run your own model.


Assisting with breakthroughs isn't sci-fi, it's things like bouncing ideas off an AI or having it help with code.

It can do this to a limited degree. I did mean "help with" in a very practical, limited way.


I was thinking the way LLMs work is quite similar to a human reading a lot of books and remembering the stuff in them so as to be able to answer questions, but not really thinking about it. GPT-4 has already read all the stuff and can answer back, so it's hard to do much better there.

The next step would seem to be actually thinking about it, which o1 is maybe making a start on. If you think of humans learning, say, programming: they read the books and can answer questions, but they have to do exercises, write code and so on, which gives greater understanding, which is maybe where AI needs to go next.

By the way, the Altman/Tan Nov 8th interview, which is the source for most of the Altman mentions in the article, is worth a watch if you skip the boring bits: https://youtu.be/xXCBz_8hM9w


It sure feels like everything after GPT-4 has just been an incremental improvement (putting aside multimodal capabilities and longer context windows) - GPT-4 Turbo, GPT-4o, o1.

GPT-3.5 with the ChatGPT front end was just such a massive improvement over everything that came before it. Then 4 months later they released GPT-4, which was a significant improvement but, more importantly, gave the impression that progress was actually accelerating. Then for the last 20 months we got incremental tweaks and new models that are a little better at certain tasks and a little worse at others.


GPT-3 was released in 2020, though, 2.5 years before GPT-4, so 2.5 years to GPT-5 could still be the same rate, minus the RLHF breakthrough, but maybe adding the inference-time reasoning of o1-preview plus more of the multimodality of 4o.

So ChatGPT 3.5 was potentially to 3 as 4o and o1-preview are to 4, with 5 still to come. But it definitely could hit a data wall, and maybe they lost some of the pirated book, scientific paper, and textbook sources after scrubbing for the lawsuits. On the other hand, they also have a lot more user interaction data, and massive amounts of data uploaded by users that may not exist on the web.

They are now ranked in or near the top 10 websites in the world, and are especially popular among knowledge workers who leak all kinds of data to them.


Huge difference between the original GPT-3 text-davinci released in 2020 and the 3.5 version that powered ChatGPT when it was released in late 2022.

Data wall? If it had consumed everything already, then isn’t the problem the model, not the input?

The question is what is going to happen to all those revenue projections, and to the money that OpenAI has received, if GPT(x+1) is just a minor improvement over GPT-4o/GPT-4 Turbo etc.

Even o1-preview hallucinates more than e.g. Claude 3.5 Sonnet before it, and does only slightly better than GPT-4o, according to the paper on OpenAI's own SimpleQA benchmark. This is precisely the problem o1 tried to tackle at great effort, i.e. despite consuming far more resources as it tries to reason. So while o1 was an improvement in general quality, it is also a failure in terms of what OpenAI really wanted to see, and it remains unclear what kind of future their reasoning models have. There's an emerging picture being painted of OpenAI that I'm not sure all investors are on board with or even seeing yet.

You can’t rule out a rabbit out of the hat for 5, but it sure seems like 4.0 was an inflection point and a good time to sell out.

Yeah, even OpenAI said o1-preview was not going to be a replacement for 4o.

It is still exciting to see some breakthroughs in this field, but it isn't going to be like the early days of GPT-4 release.


They are already bleeding money as far as I'm aware, so either they have to jack up prices enough to make up for $5bn in losses, or Sam Altman has to convince investors to contribute to another funding round.

Considering he was willing and delusional enough to ask for $7T for AI chips, I'm sure he will try.


My guess, as other commenters have mentioned, is that they will start building integrations with CRMs, ready-to-deploy RAG apps for enterprise knowledge bases, etc. There is still money to be made; how much, I do not know. I have been using the GPT-3 API for some work stuff since 2021, and I remember that prices dropped dramatically when GPT-3.5 Turbo came out. Now they are engaging in what I presume are price wars with Google and Anthropic. Anthropic already charges a higher price, even for its Haiku model, citing "increased intelligence", but it doesn't beat 4o-mini in benchmarks.

While there is money to be made in further integrations (hell, that's where I see most of the productivity increase from these tools coming from), OpenAI has already spent billions developing these tools, money which they have to pay back in some way soon.

This is also ignoring the giant elephant that is open models. Soon enough models like Llama will be able to match or even surpass ChatGPT, at which point why would any sufficiently large company pay for the API when they can run their own model, especially when all those GPUs used for training flood the market?

But then again, plenty of large companies still use AWS, even when it makes no sense to go serverless, so they might have a market to capitalise on.

We live in interesting times for tech: Moore's law is dead, Intel is falling, layoffs are everywhere...

I sure picked the best time to go to university for Computer Science T-T


Those GPUs also cost a bomb to run, and LLMOps isn't super easy. I am working with a large OEM manufacturer right now as a consultant, and they are also experimenting internally with LLMs, but they have enough resources to run those models. I don't see smaller companies having enough resources to experiment with various models at scale like they are.

>I sure picked the best time to go to university for Computer Science T-T

Going to be a bit contrarian here – while it's true that jobs can be tough to find and layoffs are discouraging, I genuinely believe it's also one of the most exciting times to be in the CS field. The fact that we can actually talk with an "algorithm" is still quite bonkers to me, because I remember fiddling with RNNs and LSTMs just to predict the next word or two in a sentence. There are still ways that we can leverage it to make something really cool. Perplexity is one. Phind is another. Notion's AI integrations are great.

I graduated about four years ago and faced my own setbacks, including getting laid off from my first job. But despite those hurdles, I've managed to find my footing and am doing reasonably well now. Just hang in there champ.


>Those GPUs also cost a bomb to run, and LLMOps isn't super easy. I am working with a large OEM manufacturer right now as a consultant, and they are also experimenting internally with LLMs, but they have enough resources to run those models. I don't see smaller companies having enough resources to experiment with various models at scale like they are.

True, I sorta conflated running Llama on your PC with what large companies are doing.

Not to mention I was somewhat conflating ChatGPT the product with OpenAI the company. What I argued was that soon enough ChatGPT itself won't be that special compared to open-source models.

OpenAI the company is in the weird position of both having a moat and yet drowning in it: they have a huge advantage in skilled experts, engineers, and know-how, giving them a first-mover advantage, especially now that they are practically another subsidiary of Microsoft.

But they also have the notable disadvantage of having spent billions upon billions of dollars developing a model that in the end is little better, if at all, than what one can get for free from the internet.

A small company with a few dozen specialists could present a comparable product at a fraction of the cost, simply by not having to pay back the cost of developing their own model.

I feel like OpenAI will end up in a weird place soon, maybe something like a cloud provider for companies: useful for smaller ones where brand recognition and reliability matter, but having to compete with more specialised companies offering a similar service using Llama. And at some point large companies could just run open-source LLMs on their own servers with their own teams, bypassing OpenAI entirely.

The biggest winners here are those new small AI consulting teams that don't have to spend nearly as much, since they fine-tune models that are already made.

You probably know way more about these things than me, what do you think of this prediction?

It doesn't sound as terrible for developers as I first thought, though it pains me to see how many people quit or never went into software development due to the AI hype. We lost a third of our class from 2023, and I assume things are even worse in America and other developed countries.


OpenAI's position is indeed paradoxical. They have a considerable lead in terms of expertise and infrastructure, yet that very advantage comes with the burden of their substantial development costs. A small, nimble company leveraging open-source models can undoubtedly provide some competitive pressure by offering similar capabilities at a lower cost.

Despite these challenges, I believe OpenAI has strategic avenues to sustain and grow. Their investments in integrations, enterprise solutions, and reinforcing the reliability and scalability of their models can maintain their edge. The trust and infrastructure they offer might still be appealing enough for many businesses to stick with them, similar to the AWS analogy you mentioned.

As for the job market and the future for developers, I see your point. The AI hype has indeed introduced some volatility. However, I am cautiously optimistic. The evolution of AI and its integration into various fields will eventually balance out, creating new opportunities even as it displaces others. I still believe we’re in a transformative period where mobility and adaptation within the CS field could lead to exciting new prospects.

To your last point, it’s indeed tough to see talented individuals shy away from software development due to the current uncertainties. However, I hope this phase will pass and those who remain will likely find themselves at the forefront of some groundbreaking developments. Let's hope our lord and saviour, J-Pow has many more rate cuts for us in the future.


Why does your reply feel like it was edited by an LLM?

Still, feels like we are at risk of another AI winter soon, especially with such a huge bubble being built around it this time.

Here's hoping the US won't hit a recession because of it.


Is Orion a scaled-up GPT-4, or is it a different "arch" (like o1)?

This story only matters if we find out scaling doesn't work. If some new experiment on context/CoT/long-term memory etc. doesn't work, that tells us nothing.


If true, does this imply the end of the scaling laws? I'm guessing all these players were expecting a certain level of model improvement for their compute budget. If they're now disappointed, this suggests their estimates were off?

Or is this a disconnect between pretraining loss and downstream performance on real-world tasks?


Scaling laws are largely about data and parameters; compute enters mostly indirectly and more weakly.

If there's no extra data most scaling predictions start to look bad.

None of this is surprising. Every AI/ML researcher has done a back-of-the-envelope estimate to see how much data there is and when this must stop. Me included. It's a common topic at conferences.
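For what it's worth, the commonly quoted Chinchilla-style fit (Hoffmann et al., 2022) makes the data dependence explicit: loss is modelled in terms of parameter count N and training tokens D, with compute entering only through how you trade the two off:

    L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}

Once D stops growing, the B/D^beta term becomes a floor that extra parameters or compute won't push through.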


Am I understanding it correctly that the gist of your argument is that OpenAI, Google etc have run out of available data to train LLMs?

What's your estimate for how many tokens that represents?


I suspect it’s more the end of an avenue than the end of the road, but it sure seems like throwing more data and compute is hitting diminishing returns on existing methods.

It was never a "law"; it was what companies selling models made their investors believe, but the limits were obvious from the start.

Is the obstacle compute, or is it available training data?

At some point the LLMs are just redigesting their own vomit.


Very likely the training data.

Punctuated equilibrium

Yann LeCun has been arguing for a while that without a 'world model' there will be a limitation. I suspect it is only a matter of time before this problem is solved (though I have nothing empirical to support this).

From the amount of data each successive generation used (which grew many orders of magnitude each time) to the decreasing, logarithmic performance, it's quite clear the steam is running out on shoving more data into it. If one plots the data-to-performance graph, it's horribly logarithmic. From another perspective, the ability of LLMs to transfer learning actually decreases exponentially the larger they and the data sets get. This fits with how humans have to specialise in topics, because the mental models of one field are very difficult to transfer to another.

If that's the case, why aren't we seeing specialized LLMs yet for, say, only JavaScript, or translating from English to Portuguese, etc.?

We are likely going to get there. Similar to steam/combustion engines (and other core technologies like computers, wireless transmission etc.), there's first a massive rush to increase the raw power, at the cost of efficiency and effectiveness for more niche use cases. Then it is specialised to various use cases with large improvements in efficiency and effectiveness. My own prediction for where most gains will now come from is:

1) Creating new "harnesses" for models that connect to various systems, APIs, frameworks, etc. While this sounds "trivial", a lot of gains can come from this. Similar to how the voice version of ChatGPT was (apparently) amazing: all you really had to do was create an additional voice-to-text layer and another text-to-voice layer (a rough sketch of that layering follows after point 2).

2) Increasing specialisation of models. I predict that over time, end-user AI companies (i.e. those that just use models rather than develop them) will use more and more specialised models. The current, almost monolithic, setup where every service from text summarisation to homework help is plugged into the same model will slowly change.
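As a toy illustration of point 1, here's roughly what the voice "harness" amounts to. Every function below is a hypothetical placeholder, not any vendor's real API; a real version would call actual speech-to-text, LLM, and text-to-speech backends:

    def transcribe(audio: bytes) -> str:
        # Placeholder for a speech-to-text model.
        return "What's the weather like today?"

    def complete(prompt: str) -> str:
        # Placeholder for the underlying text-only LLM.
        return "I can't check live weather, but here's how you could."

    def synthesize(text: str) -> bytes:
        # Placeholder for a text-to-speech model.
        return text.encode("utf-8")

    def voice_turn(audio_in: bytes) -> bytes:
        # One conversational turn: speech in -> text -> LLM -> text -> speech out.
        return synthesize(complete(transcribe(audio_in)))

    print(voice_turn(b"<fake audio bytes>"))

The model in the middle stays untouched; the gains come entirely from the layers bolted around it.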


We kind of have, that's what fine tuning is trying to achieve.

We haven't seen wholesale specialised models yet because creating foundation models is expensive and difficult and the current highest ROI is to make a general model.


> to the decreasing, logarithmic performance

In what measure, loss? Loss can't go below the inherent entropy in the text (with overfitting it could get nearer to 0, but not all the way if it's next-token prediction and there are multiple continuations of the same prefix).
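To spell that out (this is the standard information-theoretic decomposition, nothing specific to any one model): the expected next-token cross-entropy splits into the data's intrinsic entropy plus a KL term, so the entropy is a hard floor:

    \mathbb{E}_{x \sim p}\left[-\log q_\theta(x)\right] = H(p) + D_{\mathrm{KL}}(p \,\|\, q_\theta) \;\ge\; H(p)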

With respect to hallucinations, 4 got incredibly better than 3.


In intelligence/performance. It's admittedly a fuzzy notion. Most benchmarks will probably show decreasing gains between generations. Similar to time/space complexity, trying to debate what performance/intelligence is will get into a million definitions, caveats and technicalities. But a relative comparison between inputs and outputs gives us useful information.

The inputs - data, compute and parameters - going into training these models have grown by many orders of magnitude between each gen. There's a lot of fuzziness about how much better each gen has gotten, but clearly 4 is not many orders of magnitude better than 3 by any reasonable definition. This mental model isn't useful to say how good each gen is, but it is quite useful to see the trend and make long term predictions.


I just don't take any of the testing of these seriously anymore. OG GPT-4, before the "GPT Store" and "Dev Day" this time last year, still just felt way more powerful than anything on the market today.

3.5 Sonnet felt close but even that feels like it's being downgraded now.

Just feels like things are silently being quantized or downgraded in other ways to increase profitability once people jump on. Even the same prompts that one-shotted flawlessly pre-Dev Day 2023 now just fail for me on GPT.


This seems to be a prevalent problem that people bring up.

I remember there was a forum post about it that got a lot of flack, but maybe they were onto something.

Here it is: https://community.openai.com/t/did-chatgpt-4o-get-progressiv...


I’m going to laugh so hard if we finally find out that quantization really does hurt models, and all the claims of 99.9999% at 4bit or lower precision really were full of it!

Of all the gaslighting I see in ML/AI, the idea that quantization is almost free is up there. I don’t buy it for one second at all, no matter how many charts you try to show me implying that the logprobs are 99% the same. Sure, maybe until you go to a 10K-token context window!


It's probably kind of like dementia: 99% of what they say is like before they had early dementia, but it's still a big problem.

None of these is GPT-5.

Also, intelligence isn't the only criterion. Updated knowledge also matters. Context length also matters for large problems that can't be divided.


Maybe they don't feel comfortable calling it GPT-5 if it's not much of an improvement over 4o.

I think the *undisclosed* issue with a conceptual true GPT-5 is that it will be too expensive to run. Most GPT-4o users won't be able to afford it, not now and not ever.

OpenAI is already running at a huge loss, and it isn't like o1 is cheap to run either.

I wonder at what price these LLMs can be run in order to be profitable, and whether just running your own model would be worth it.

Maybe it could be even cheaper if you're willing to fall behind on R&D, but keep in mind that every time OpenAI invented something, it was quickly copied by its competitors.


I think it will get cheaper when Nvidia has a lot of competition. Currently, Nvidia charges a huge markup, and there exists much room to optimize this expense. I believe OpenAI already has plans to move to other hardware, at least partially.

Google TPUs should theoretically be able to compete with Nvidia's H100, but for some reason they aren't seen as a practical alternative.

I agree that training LLMs will get cheaper, but it's likely that the compute bottleneck no longer limits LLM performance.


But why would you spend billions training an LLM when it barely shows any improvement over the previous model?

Unless OpenAI is willing to do something desperate, the best they have right now is what LLMs are, and so the cost would be in maintaining them. If you already paid for a bunch of H100s to train, there is little incentive to move away unless you know TPUs are going to be significantly cheaper to run, cheap enough to justify the cost of buying them.

This is ignoring the giant bubble that has ballooned out of AI hype, which if popped would be disastrous for the companies most invested in the industry. Nvidia has a P/E ratio of 60-70; if they don't get enough future growth to justify it, they could lose a third of their valuation if not more.


A lot of the top researchers are working on making LLMs more capable, so it's not impossible for new breakthroughs to occur; they just might not be as rapid-paced as in the last two years or so.

There's also lots of utility to be found with the best LLMs today. I'm working on something myself, and have seen others pushing the boundaries in hackathons and startups. So that's a lot of innovation and value that's definitely not a bubble.


The next big improvement would be infinite context and answer length, keeping the same quality. I don't think anything besides this will be convincing. A real GPT-5.

This is not possible in the transformer architecture.

Could you elaborate a little bit here?

Not really, without explaining how the whole thing works.

There are a lot of resources... my favorite is https://karpathy.ai/zero-to-hero.html
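The very short version, though: vanilla self-attention scores every token against every other token, so for a context of n tokens you compute an n-by-n score matrix, and cost grows quadratically with context length. That's the basic reason "infinite context at the same quality" doesn't just fall out of the standard architecture:

    \mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V,
    \qquad QK^{\top} \in \mathbb{R}^{n \times n}

Long-context variants (sparse or linear attention, retrieval, etc.) attack exactly that term, usually by trading something else off.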


My view is that LLM performance looks like bursts of improvement as breakthroughs happen, with lots of much smaller incremental improvements along the way.

What OpenAI were hoping for was AI stepping in to provide the breakthroughs. But for now it'll have to assist with incremental improvements only.

There's probably 50% disappointment from the AI optimists and 50% relief from the pessimists.


Another thing this paper mentions is how open models like Llama are quickly closing in on closed-source ones.

I wonder what kind of future LLMs have, since soon, with a good enough GPU, you can run your own model that is as good as the paid one (whose price I expect to rise considerably once investment dries up).

One option is running ads, I guess, though I question how that would even work.


That's why Meta for now has the best strategy (unfortunately?). Cheap LLMs to everyone = more content for them to show ads.

How would you even put ads into an LLM, especially an open-weight one?

Maybe that's why they are talking about a fully AI-generated feed, despite how unpopular that is with users.

You don't have to pay the content creators a share if you produce all the content, lol.


Does Meta pay social media content creators, outside of bootstrapping Reels against the TikTok threat? They are a middleman for some other payments, taking a cut of them, but not making the payments themselves.

If you get a big profitable following you even start having to pay them for boosts to keep it.




