ChatGPT for Teams (openai.com)
436 points by szermer 10 months ago | 420 comments




I want ChatGPT for Family.

The free version gets a lot of use around here but the most powerful feature is the ability to search the web, which is only available to paid users. I pay $20/month for myself and I’d happily pay a bit more for the whole family, but not $20/month per person - it adds up. Family members end up asking to borrow my phone a lot to use it.

Give me a 3-4 person plan that costs $30-$40/month. You’re leaving money on the table!


At Kagi we plan to offer this for $20/mo and 6 family members included. You get both paid search (much better than openai bing) + AI (gpt-3.5-turbo/claude-instant). If you need gpt-4 it will be an optional $15/mo upgrade per family member.


This is not a comment on the Kagi service, but more a comment on transitions in general. I have tried Kagi and I think it's great. I really want to use Kagi. I want to support Kagi. I have a mental sticky note that says 'start using Kagi on everything'. Every time I sit down to do some tasks it just falls to the bottom of the to-do pile, because I feel like there are so many devices I now need to go through and update. Google really has a powerful advantage by bundling search in with the browser product. Isn't that what got Microsoft into antitrust trouble? How is it allowed?


Start with your most-used devices; I'm guessing your phone and your main computer. The others will slowly follow. Once I switched my phone's default search to Kagi, it made a huge difference, because I search a lot more while on the go, and being able to get good search results with no spam and no ads can be an amazing experience.


I ran into this same issue. Finally took until Christmas break to have the mental space available. Switched to Firefox + Kagi, and very happy that I did.


I wasn't able to quickly find an answer: can you use it also in private browsing mode on Firefox? I basically only use private Firefox browsing both on mobile and desktop.


They give you a unique URL associated with your account that you can use without manually logging in. It does somewhat defeat the purpose of private browsing mode depending on what you're using it for though.


I've been playing around with the assistant stuff and adding !expert to my searches to see what the LLM spits out first as a quick check. I'd love if I could get my custom assistant to work - sounds like a lot of fun to be had there.


Is gpt-3.5-turbo/claude-instant better than the model that the free tier of ChatGPT uses? FWIW, from my testing dolphin-2.5-mixtral-8x7b was clearly better than free-tier ChatGPT.


We are following this closely, but with the price of the models going down so rapidly it is really hard to justify capital investment in infra/people to maintain our own stack globally at production quality (at this moment; it will probably make a difference at a scale larger than where Kagi is today).


Nice, I'm looking forward to that! You guys have some pretty outstanding AI chops going. I've been really impressed!


Out of curiosity, do you cap the trial accounts in any way? It's my first time trying Kagi, and it's been over a minute and it still hasn't started showing "Results" (when using `!code`). Just in case you do cap, that's probably not the best approach if you want to attract people!

update: it actually timed out with no result/explanation.


Code is the one I actually haven't tried. In general, though, they do cap trials, but it's by number of requests, and it will either allow you or not, so that sounds like a bug.


Sounds great. Will that plan also have access to the Search API which is currently restricted to Teams plans?


I haven't found the web search feature particularly useful or helpful. Far too many sites are blocking the ChatGPT bot. I also find that ChatGPT isn't getting any better search results than I would if I searched for something myself. Quality of the results varies a lot too, and ChatGPT doesn't really seem to be able to distinguish between high quality content and not so high quality content.

For software development I find that Phind is pretty good at combining search results with GPT-4 in a way that increases the quality of the result.

Maybe OpenAI can convince the Bing team to index everything using their embeddings. If ChatGPT could also read the text directly from Bing instead of having to "surf the web", it would be able to consume several search results at the same time. In the future I could even see Bing et al. running an LLM over all text when indexing a page to extract info and stats like a summary, keywords, truthfulness, usefulness, etc.


Same experience. 90% of the time I ask it to summarize something, it can't because it's blocked. At least it has the decency to tell me that it's blocked rather than just failing (which is what Kagi does. Love Kagi, but that's a minor improvement they could make).

This is where I suspect Bard is going to be an absolute beast of a product. The ability to quickly and thoroughly consume a bunch of hits, find the best ones, and summarize them is something Google is uniquely positioned to do (and increasingly, Kagi).


I feel that LLMs have the potential to reorganize the web. Instead of being ad sponsored, raw, high quality data will be priced and aggregated.


A workaround for this is to print the site as a pdf and upload it to GPT.


ChatGPT's web search is interminably slow and I've added to my custom prompt to not do web searches unless explicitly asked. However, I'd give Perplexity.ai a try - I've found it to be incredibly fast and useful (funnily enough, they largely also use Bing for search retrieval results) and if you pay for Pro (which I do now), you can also use GPT-4 for generating responses.


I also had a good experience with the default free version of Phind. I was facing an issue in a Python framework, which turned out to be a bug. Phind was able to pinpoint the GitHub discussion where the issue was raised and also suggested a workaround code example based on that discussion. No other free AI tools were able to do this.


I have a custom GPT for telling my 3 year old bedtime stories. It's super cute to listen to the two of them collaborate back and forth where my kid will add new characters (friends from school, or stuffed animals) and new wacky twists to their adventures, and the storyteller GPT will come up with a new revised version.

It would be pretty rad if she could just have the app on her tablet with a family plan. She doesn't use it quite enough to justify getting her own subscription, but especially if we could share GPTs across devices, so she gets the ones I make for her, but doesn't get flooded with my work or research related GPTs.


Oh, how does your 3 year old interact with GPT?

BTW, I once read about someone who built automated generation of bedtime stories (with his children as the main characters) using the OpenAI API and speakers. I was quite amazed (not a thing I would do, but a nice use of GPT).


There's pretty good voice input for ChatGPT built into the iOS app.


The ChatGPT app has a button for voice2voice.


ummm how do i get this? i've got a 5, 3, and 1 year old and would love this


We just do it from the ChatGPT app on Android.


I'm certain that they will soon release anything that promises more subscriptions. ChatGPT for Family, ChatGPT for Gov, and so on...


ChatGPT for Kids™


“The Diamond Age: Or, A Young Lady's Illustrated Primer”

https://en.m.wikipedia.org/wiki/The_Diamond_Age


The more I think of it the more I realize how enormous an impact this could have


Can't wait for my 5 year old to start a sentence with "However, it's important to note that ..." ^^


If he starts with “As an AI language model…” then you should really start to worry


You mean the children grow up loving the GPT more than their parents?


They already love their TV, GameBoy and smart phone more than their parents.

(Deliberately using anachronistic examples here. The issue has been with us for a while.)


Hit me up so we can work on it :)


There seems to be a plethora of somewhat ChatGPT-competitive alternatives that do search the web at this point, though. Maybe try phind.com?

(Although I haven't yet myself tried any alternative that is clearly on par with ChatGPT 4)


Can't you use the same account on multiple phones though? I thought this was a no-brainer.


This is probably correct, but I'd prefer that my family not read the conversations I've had; even if I'm not saying anything too private, it feels too intrusive (it'd be a bit like reading my inner thoughts).


It's interesting that you're so trusting of strangers knowing your inner thoughts (OpenAI) but not your family


How could I look my wife in the eye, or expect my kids to grow up happy, if they knew their dad doesn't know how to use a regex to detect emojis in a string?
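(For the curious, a minimal Python sketch of one way to do it; the code-point ranges below are approximate, since emoji are scattered across many Unicode blocks.)

    import re

    # Approximate emoji matcher covering the main emoji blocks.
    # Not exhaustive: emoji are spread across many Unicode ranges.
    EMOJI_RE = re.compile(
        "["
        "\U0001F300-\U0001F5FF"  # symbols & pictographs
        "\U0001F600-\U0001F64F"  # emoticons
        "\U0001F680-\U0001F6FF"  # transport & map symbols
        "\U0001F900-\U0001FAFF"  # supplemental symbols & pictographs
        "\u2600-\u27BF"          # misc symbols & dingbats
        "]"
    )

    def contains_emoji(s: str) -> bool:
        return bool(EMOJI_RE.search(s))

    print(contains_emoji("hello"))      # False
    print(contains_emoji("hello 🙂"))   # True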


I hope there is more going on behind those eyes than not being a regex expert


I don't want my family to know I spent 3 hours chatting about the Holy Roman Empire.


What would change if they knew?


Why the questions? It is no one else's business why they want that level of privacy. Is it your intent to convince them that they don't need privacy?


> Is it your intent to convince them that they don't need privacy?

Quite the opposite actually. My intent is to shed light on the fact that sharing information with OpenAI is not private. And you should not do that with information that you wouldn't even share with people you trust.


> Quite the opposite actually. My intent is to shed light on the fact that sharing information with OpenAI is not private. And you should not do that with information that you wouldn't even share with people you trust.

I'm not OP, but I think you're missing the point.

Privacy and trust isn't really a 1D gradient, it's probably planar or even spatial if anything.

Personally I'd be more willing to trust OpenAI with certain conversations because the blowback if it leaves their control is different than if I have that same conversation with my best friend and it leaves my best friend's control. The same premise underlies how patients can choose who to disclose their own health matters to, or choose who their providers can disclose to.

Same reason behind why someone may be willing to post a relationship situation to r/relationship_advice and yet not talk about the same thing with family and friends.


> Same reason behind why someone may be willing to post a relationship situation to r/relationship_advice and yet not talk about the same thing with family and friends.

I ask that you consider the people who use Reddit and the people who run Reddit independently. The people who use Reddit are not in a position of power over someone who asks for advice. The people who run Reddit on the other hand, are in a position of power to be able to emotionally manipulate the person who asked for advice. They can show you emotionally manipulative posts to keep your attention for longer. They can promote your post among people who are likely to respond in ways that keep you coming back.

OpenAI has a similar position of power. That's why you shouldn't trust people at either of those companies with your private thoughts.


You're assuming power comes with an assumed guarantee of use. OpenAI has vast amounts of power with the data they're collecting, but the likelihood of OpenAI using it against any individual is small enough that an individual could consider it to be outside their threat model (I'm speaking using security language, but I doubt most people go so far as to threat model these interactions; it's mostly intuitive at this point).

Your family has limited power in the grand scheme of things, but the likelihood that they may leverage what power you give them over you is much higher.

The IRS has vast power and is likely to use it against you, hence why tax fraud is usually a bad idea.

Hence "planar" rather than linear.


> OpenAI has vast amounts of power with the data they're collecting, but the likelihood of OpenAI using it against any individual is small enough that an individual could consider it to be outside their threat model

I think your use of the word "individual" is a bit weird here. I absolutely find it likely that OpenAI is doing individualized manipulation against everyone who uses their systems. Maybe this would be more obvious if you replace OpenAI with something like Facebook or Youtube in your head.

Just because they are using their power on many individuals doesn't mean that they are not using their power against you too.


> I think your use of the word "individual" is a bit weird here. I absolutely find it likely that OpenAI is doing individualized manipulation against everyone who uses their systems. Maybe this would be more obvious if you replace OpenAI with something like Facebook or Youtube in your head.

> Just because they are using their power on many individuals doesn't mean that they are not using their power against you too.

Yeah but at this point you're identifying individual risks and grasping at straws to justify manipulating* everyone's threat model. You can use that as your own justification, but everyone manages their own personal tolerance for different categories of risks differently.

*Also, considering the published definition of manipulation is "to control or play upon by artful, unfair, or insidious means especially to one's own advantage," I think saying that "OpenAI is doing individualized manipulation against everyone who uses their systems" is an overreach that requires strong evidence. It's one thing if companies use dark UX patterns to encourage product spend, but I don't believe (from what I know) that OpenAI is at a point where they can intake the necessary data both from past prompt history and from other sites to do the personalized, individualized manipulation across future prompts and responses that you're suggesting they're likely doing.

Considering your latest comment, I'm not sure this discussion is receiving the good faith it deserves anymore. We can part ways, it's fine.


Too much discussion about the Holy Roman Empire over dinner? People talk to get things off their mind sometimes, not in the infinite pursuit of conversation.


My point was not that they should talk about the Holy Roman Empire with their family, but that they shouldn't share information with strangers that they wouldn't share with their family.

If you don't want your family to know something, you shouldn't tell it to OpenAI either.


> If you don't want your family to know something, you shouldn't tell it to OpenAI either.

Yeah, I think this is an over reduction of personal privacy models, but can you tell me why you believe this?


The reason you wouldn't say something to someone is because you are afraid of the power that you give people along with that knowledge.

Your family is in a position of power, which is why it can be scary to share information with them. People at OpenAI are also in a position of power, but people who use their services seem to forget that, since they're talking to them through a computer that automatically responds.


Converging threads here: https://news.ycombinator.com/item?id=38956734

tldr: power (or if you want, impact) is the linear dimension, likelihood adds a second dimension to the plane of trust.


In practice, likelihood directly correlates with power. Perhaps there is causation (power corrupts?)


Some people are more responsible than others.

For example, one's spouse typically has a lot of power, but hopefully a low likelihood in practice.


I need data on that. I haven't seen that in practice.


This is silly. It's not like OpenAI is going to find your family's contact info, then personally contact them and show them what you've been talking about with ChatGPT. It's just like another post here comparing this to writing a post on /r/relationshipadvice with very personal relationship details: the family members are extremely unlikely to ever see that post, the post is under a pseudonym (and probably a throwaway too), and the likelihood that someone is going to figure out the identity of the poster and seek out their family members to show the post to them is astronomically small.


They would know that it was neither holy, nor Roman, nor an empire. Discuss.


Is that truly interesting? OP does not have to care about what the AI thinks of him. OP does not have to care about accidentally offending or hurting the AI either. OP does not have to care about whether the AI finds him silly or whatever.

Normal humans care about all of those with their families.


AI is a tool controlled by people. In this case, people who are not OP.


So? That doesn't invalidate the point of the comment you replied to.

To give another example: The cashier at the supermarket knows when I'm buying condoms, but that doesn't mean I want to tell my parents.

And neither would I want to know as a parent, when or whether my kids order bondage gear on Amazon.

It's not just about my information going to other people, but also keeping certain information of other people from reaching me.


Fine then. You don't want your family to find out about the love you have for the Roman Empire. But you are a programmer, yes? So make an app that's just a wrapper for the ChatGPT APIs you're paying for and distribute that to your family's phones. They'll use your OpenAI API key and each will have their own little ChatGPT-4 to query. Have fun.
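(A minimal sketch of such a wrapper using the official openai Python client; the model name is a placeholder, and every request bills your API key per token instead of per seat.)

    # pip install openai
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def ask(prompt: str, model: str = "gpt-4") -> str:
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    print(ask("Was the Holy Roman Empire holy, Roman, or an empire?"))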


As a general rule, I don’t share account access. I can count on one hand the number of times I’ve made an exception to that rule and it was always for something relatively benign like Spotify. Privacy isn’t the only reason to avoid sharing, either.

I don’t even like that when my family picks up the remote, Apple TV assumes it’s me using the TV. They watch something and mess up my Up Next history and recommendations. I wish it supported using a PIN. I’ve thought about getting rid of the remote to force everyone to use their phone as a remote, because then it detects who is using it and automatically switches accounts. But that means everyone has to have an iPhone and have their phone charged, etc. Getting rid of the remote just for my convenience seems too inconsiderate.


Can't you just share the login details?


Sharing the same space and turning off/on the custom instructions is also very annoying.


Agreed that the most powerful feature is the ability to search the web. This feature single-handedly makes ChatGPT a very potent Google search alternative, but without the dreaded advertisements.


Bing's version can search the web.


I guarantee you they aren't leaving money on the table. They're running the same techno-capitalist playbook.

They want you hooked on apps, the API, etc., before the real costs are brought in. They likely should be charging anywhere from $50-100 depending on usage.


A notable feature here is "no training on your business data or conversations" which really shouldn't have to be a feature. (requests using the ChatGPT API already aren't trained upon)


Similarly you can opt your individual account out on the ChatGPT side. [0] Although by default they do seem to use both vanilla and Plus conversations for training.

[0] https://privacy.openai.com/policies


If you opt out you lose access to basically every feature you're paying for. No conversation history, no access to plugins, etc.


Are you sure about that? Parent was not referring to the "APP -> Data controls -> Chat History & Training" Radio Button which would delete your history older than 30 days, but rather to a specific form you can use to opt out.

Coincidentally I just used this form yesterday and got the confirmation about opting out.


Edit: This was previously a comment where I said I thought it no longer showed history. As it turns out, the website version of ChatGPT is just having a ton of issues: it is very slow to load, and when it does finally seem to load, the sidebar doesn't show any content. After refreshing a few times the sidebar does show up.

So if people are going through the steps now they might indeed think that they no longer have access to advanced features.


I don’t agree with you here. OpenAI should be free to train on your data assuming you agreed to that in the terms of service (and yes, you did). If they ask for a little more money in exchange for not having that valuable information in trade, that seems fair.

If you want an entirely free and open LLM experience, you can also run one of the ever-improving open source models on your own hardware. But for many if not most companies, paying $25/mo per user for something as amazing as ChatGPT-4 is a bargain.


> I don’t agree with you here. OpenAI should be free to train on your data assuming you agreed to that in the terms of service (and yes, you did). If they ask for a little more money in exchange for not having that valuable information in trade, that seems fair.

Yea, another way to word it would be to imagine that they _only_ had a more expensive "no train" option. Now ask if it would be okay to offer a lower priced but yes-train version.


This is ChatGPT, not OpenAI's API with the gpt4 models. This is allowing your team to use chat.openai.com together, rather than having to build or deploy your own with the API.


Yes, but isn't it the case that even in ChatGPT no training is done?


By default they train on user data and cripple the product in various ways if you opt out.


For some time I thought the primary way to opt out was to disable chat history. That made it impossible to use custom instructions. But I later found the account-level opt-out request, which, assuming they respect the weakly-worded “ask us not to train on your data” language, preserves the product’s full set of features:

https://privacy.openai.com/policies


There is literally nothing on that page (nor in the linked help article that describes how to export data) that says how to tell them to "not use my data for training" without losing history, at the least. If you know more about this feature please do tell.


> 'There is literally nothing on that page that says how to tell them to "not use my data for training"...'

You can opt out of ChatGPT using your conversations for future training, without disabling any features like convo history.

Either some browser plug-in like an adblock is hiding the button from you, or you're not noticing and clicking it (I'm guessing the former [1]).

For me, on iPhone, there's a black button with white text "Make a Privacy Request" which sort of hovers bottom centre of the page the way "Chat Live with Support!" buttons often hover.

Click on that button to get to this - https://privacy.openai.com/policies?modal=take-control - which allows you to either delete your account, or:

"I would like to: Step 1 of 2 Do not train on my content Ask us to stop training on your content"

They then tell you it applies to content going forward, not stuff already done. But that's the opt out that doesn't require losing ChatGPT conversation history.

[1] On iOS Safari with 1Blocker enabled, I could see the button without it being hidden as an annoyance or widget or whatever, however when I tried entering email for opting out to check it still works it gave me an error message that suggested adblock type things might be the issue. I opened the page in Firefox for iOS (so same browser engine as Safari, but without 1Block) and it worked with no error message.


Never knew about this link. I have requested that they not train on my data, but can we even confirm they will honor it?


In addition to the usual risk any company runs when breaking the law (that a whistleblower might be brave and speak up), it could also come out in discovery during a court case, of which there are likely to be quite a few brought against them regarding what data they train on.

The benefit of training on data of people they've explicitly agreed not to train on (which is probably a very small % of even paying users, let alone free ones) is unlikely to be worth the risks. They'd be more likely just to not offer the opt-out agreement option.

But ultimately, we often can't know if people or companies will or won't honour agreements, just like when we share a scan of a passport with a bank we can't be sure they aren't passing that scan on to an identity-theft crime ring. Of course, reputation matters, and while banks often have a shit reputation they generally don't have a reputation for doing that sort of thing.

OpenAI have been building up a bit of a shit reputation among many people, and have even got a reputation for training on data that other people don't believe they should have the right to train on, so that won't help get people to trust them (as demonstrated by you asking the question), but personally I still think they're not likely to cross the line of training on data they've explicitly agreed with a customer not to use.


?

Roark66 gave a better answer, but...

On desktop I clicked the link and immediately saw "Make A Privacy Request" top right (where login / account / menu buttons might be)

I must ask: did you honestly just miss this UI element, or do you think it might be some confirmation bias you already had?


It also breaks the voice chat in the app if you turn history off.


For some reason “Confidentiality Tax for Small Business” has less of a ring to it than “Teams”.


Does anyone really trust openai isn't training on their data given their views on copyright?

It would make more sense for them to just train on it anyway.


My personal guess is that they don't put it in training data, BUT they still have humans read what you send, to see 1. what innovative uses they can try to copy/integrate; 2. how to optimize (both in score and throughput) for their consumers' usage.


If their EULA says one thing and they do another, that opens them up to huge liabilities. Though anything is possible.


Well "huge liabilities"... until recently I haven't seen anything resembling something a company wouldn't just say "pay the fine and do it anyway, it'll be better for our bottom line".


Google trains on all your searches. Why is OpenAI held to a higher standard?


Google should not be allowed to do that either.


OpenAI's product might just print out the info you put in verbatim to another user if asked politely


Note that "no training on your data" is only for Team and Enterprise: https://openai.com/chatgpt/pricing


You can make a privacy request for OpenAI to not train on your data here: https://privacy.openai.com/

Alternatively, you could also use your own UI/API token (API calls aren't trained on). Chatbot UI just got a major update released and has nice things like folders, and chat search: https://github.com/mckaywrigley/chatbot-ui


It should be opt-out by default, not opt-in.


Until people decide to boycott services where privacy is opt-in, nothing will change.


Maybe the EU should fine them like a billion dollars a month until it becomes opt-in.


Because the EU knows better about these kinds of things than the people who build and use them?

Just like the EU knows better about what chargers people should use than customers and engineers? Such wise bureaucrats!


That comment was immediately downvoted so there isn’t much hope.


No training on the API as well. I integrated it with Telegram over a year ago, for convenience rather than for cost savings. I've been paying $2 per month on average ever since. And "no training on your data" is included.


The API is not used for training purposes either. https://openai.com/enterprise-privacy


I suspect that user data isn't really valuable to train on anyway: the data will be full of users lying to the bot to try to manipulate it.

But "we won't train on your data" is a powerful marketing line, and a differentiator between classes of customer, even if they have no intention of training on anyone's data.


A major change is that you cannot opt out from having your conversations used for training unless you are using a Team account, which is pretty costly for a single person.


According to this, you can still opt out of training, but you have to turn off history: https://help.openai.com/en/articles/7730893-data-controls-fa...


That's been true for at least a month, not new with (though it may have been in anticipation of) teams support.


Sneaky buggers


This link lets individuals opt out: https://privacy.openai.com/policies


OpenAI understand their tech lead isn't a sustainable moat, so are going for network effects. Similar to Slack Connect (shared channels).


I heard the no-moat theory before and I don't get it. The open source models are about a year or two behind the latest ChatGPT in terms of quality. That means companies will always be willing to pay a premium to use ChatGPT rather than rely on open source. So even if/when Google and Apple (and perhaps Meta) catch up in terms of AI quality, there's still so much money to be made for OpenAI. One interesting byproduct of late-game capitalism like this is that as more and more jobs get destroyed due to AI, so will subscriptions. So it might be a mixed bag in the end for the tech giants if there's no real economy left to buy the products, but we're a long way from there.


I think no moat vs moat is a false dichotomy. They have a moat (better researchers and data) and are about to make it even better (network effects).


It was one researcher's opinion at a competing company, and everyone treated it as fact.


Yeah, this is something I've been saying as well. Their true "moat" is their network of people who know and understand how to use their tech.


It’s the “we will make this so easy for you that you never want to switch” moat. Definitely akin to Slack, which also has the integration glue to keep you on their platform. Even though there are many Slack alternatives now that are really great, most companies on Slack will opt to stay there rather than invest in migrating.


Adjacent question, leaving aside the value proposition: do companies pay for 1000 seats like this? I didn't realize Slack is $5 a user a month. Do they discount this for bulk, or are companies paying $5k/month, $60k/year? These subscriptions must really add up.

On All In, they discussed the leverage from AI tools and they probably also meant open source, but one of the companies just rolled their own instance of a big monthly SaaS product because it was such a big expense for the startup.


I'm not really in the know, but I bet the enterprise discounts don't kick in until you're at the tens of thousands of users. In any case, $60k sounds like a lot as a top-line figure to some bean-counter, but all these sales pitches follow the same basic pattern:

- This is an essential, best-in-class tool. You wouldn't deny your employees a laptop or a free lunch, would you?

- $5/user/mo is a bargain compared to the hassle of building/hosting this yourself, punching holes in your firewall every time you need to receive a webhook, dealing with security and auth issues.

- $60k is half the cost of someone you don't need to hire on your in-house IT team. Does it make sense yet?


> I bet the enterprise discounts don't kick in until you're at the tens of thousands of users

I'll take that bet ;) Not really sure about OpenAI, but you can absolutely negotiate with almost any company.


This is why the price of "Enterprise" level of SaaS is always "Contact Us". Contract deals (i.e. "lock-in") are negotiated behind the scenes.


You'd be amazed at how many startups waste hundreds of thousands (and millions) of dollars buying seats for tools that barely anybody uses. Interest rate increases have made VC startups get a little smarter, but a few years ago it was really bad. Similar to how tons of startups burn huge amounts of money on AWS due to laziness.


Yeah, companies really do. Once a year our company gets a really large bill (15k users, several services).

The thing is those same people need to be paid, and that’s a much (100x) larger bill, so the extra amount doesn’t really signify.


Ok, so there are now 2 tiers where they don't use our data to train the model?

The higher bandwidth is clearly there to entice new customers, but the question remains: what happens to the old ChatGPT Plus users? Do their quotas get eaten up by these new teams?


Looks like the $20/month PLUS plan DOES use your data to train the model now... (they seem to have removed that "feature" from the list in the side-by-side comparison)


Currently, if you disable chat history, you'll see this message:

Chat History is off for this browser. When history is turned off, new chats on this browser won't appear in your history on any of your devices, be used to train our models, or stored for longer than 30 days. This setting does not sync across browsers or devices.


It's absolutely insane to trust that they won't do this.


No it's not. If they explicitly say they won't train on your data and then they do, it's going to come out in discovery of one of the lawsuits they're fighting, and the consequences would be significant. Plus there's little incentive for them to lie about it, given most people leave history on.


yeah because no large tech company has ever lied to their customers about how their data is being handled. oh wait there are lawsuits surrounding this sort of thing all the time.


I wouldn't trust them with nuclear secrets, but to say it's "insane" to trust that they're going to do what they explicitly say they're going to do just isn't logical.


https://privacy.openai.com/policies

They hide this link a bit. They completed my opt-out request in about ten minutes and at least claim to be not using any of my data going forward for training.

I didn't lose any features like Chat History


AFAIK, Plus has always trained on your conversation data. Enterprise and the API do not.


There used to be a form you could submit asking them not to train on your data. Absent some communication to the contrary I would hope that continues to be respected.


It's not super obvious, but even with Plus you can opt-out of training.

Aside: If you can see other colleagues' interactions with the custom/private GPTs, it could be quite an efficient way to share knowledge, especially for people in disparate time zones.


> what happens to the old ChatGPT Plus users? Do their quotas get eaten up by these new teams?

This is probably run on Microsoft servers (Azure, basically), not OpenAI servers, so it shouldn't directly compete for capacity. This is more of a "the pie got bigger" situation.


I can see some good use cases:

- A custom GPT trained on your code base can help you write test cases in your desired syntax.

- A custom GPT trained on internal PRDs can help brainstorm the next set of features.

Hoping to see something good come out of this


This version of teams does not do that. You can hook it up by creating Custom GPTs and add some amount of docs to a specific GPT for retrieval, but you cannot connect an entire codebase to ChatGPT to get answers. Github[1] had introduced the feature you are talking about a year or so ago. Not sure if people are using it.

Use cases I see are common ones - basic usage of ChatGPT but admin can control access. Provides ability for companies to bill directly instead of reimbursements, and have more control over it. HR docs and policies can be a separate GPT. Though nothing which requires multi level access control.

[1]: https://githubnext.com/projects/copilot-view/


Github's feature is under private beta right now. I feel that it will be impactful.

UI components can be generated as per your UI guidelines, same for tests. Hoping for good things


I'm not too surprised by the move; it's a classic segmentation strategy. But I was surprised how poorly the example screenshots they gave reflect on the product.

You have one non-actionable marketing answer, a growth graph without axes (what are people going to do with that?), and a Python file which would be easier to just run to get the error.

That kind of reinforces my belief that these AI tools aren't without their learning curves, despite being in plain English.


Here's an idea: a ChatGPT app for Apple CarPlay. Right now while driving I often do "hey Siri", but instead of carrying on a conversation where I can ask clarifying questions, I am mostly greeted with "I cannot show you this information while driving", because rather than summarizing the answer, Siri tries to show me some website link.


“No training on your business data or conversations”

Does this mean they will still use your data for other non-training purposes?


Yes. They will use your data as input to the GPT model to deliver the response you have requested.


Appreciate the funny response but that is obviously not the intention of my question


maybe we will see "only human eyes can see your data" vs "no automated tools can see your data" in the future


I mean, how else can you actually get a result without using your actual data...?


100 messages / 3 hrs, with a 32k context window. That's really cost effective and efficient for my use case!

Does anyone know if this applies to voice conversations? This is me while I'm driving: upload big PDF -> talk to GPT: "Ok, read to me the study/book/article word for word."

Good job OpenAI.


I'm confused -- wasn't ChatGPT upgraded to 128k tokens at their last release? Or was that just the API?


Just the API.


Why would you need AI to read a document word-for-word? That can already be done in various tools without needing to go through ChatGPT.


Could you direct me to a tool that is privacy-preserving on Android (I consider my OpenAI setup privacy-friendly) and has such quality of speech?


GPT-4-1106-preview aka turbo has 128k tokens. Are they saying GPT-4 (0613) with 32k is better than GPT-4 Turbo?


> 100 messages / 3 hrs

Sorry where do you see that? I only see "higher usage limits"?


https://help.openai.com/en/articles/8801707-what-is-the-mess...

That article doesn't say 100.

100 is what I read in the openai forums earlier today.


The way they purposefully made the Enterprise plan so much better than the Teams plan is genius; the pressure on enterprises to "just do the right thing" is pretty heavy here. I'd bet this will make them more than a billion before the year is over.


It's not better. It starts with "Call Us" so no matter what it includes, it's worse.


Typical VC filler, this is a sad day for Open AI (space emphasis)


Did anyone evaluate this compared to using API access through an external GUI (e.g. continue.dev)? For software dev, did the cost end up higher? I am thinking this can be more convenient (and I suppose engineers can more easily use it outside work as a perk). Given that practical use across a team will vary, you may get a lower price using the API, and perhaps additional opportunity for scripted use.
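(A rough back-of-envelope sketch: the per-token rates below are my assumption of GPT-4 Turbo API pricing at the time and should be checked against openai.com/pricing; the usage numbers are guesses to tune per team.)

    # Rough monthly API cost for one engineer vs. a $25-30/seat Teams plan.
    INPUT_RATE, OUTPUT_RATE = 0.01, 0.03       # assumed $ per 1K tokens

    requests_per_day = 40                      # guess; tune to your usage
    input_tokens, output_tokens = 1500, 500    # per request, rough guess
    working_days = 22

    per_request = (input_tokens * INPUT_RATE +
                   output_tokens * OUTPUT_RATE) / 1000
    monthly = working_days * requests_per_day * per_request
    print(f"~${monthly:.2f}/month via the API")  # ~$26.40 at these numbers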


I think Teams is a good addition to strengthen the product vision. I just hope it can connect to Notion so that we don't need to re-import all the data.


RIP to all the startups this just killed.


Not your land, not your farm.


Our team has an Enterprise account, but individuals cannot access GPT-4 through the chat.openai.com interface. With teams, do individuals get access to GPT-4 through that interface? Is our account just broken somehow?

It seems odd we have enterprise but cannot access GPT-4 through the main ChatGPT interface.


Do you have a ChatGPT Enterprise subscription purchased via our sales team? Or are you an API customer?

The former should have GPT-4 access; if not, that’s a bug, and I can look into it if you email me at atty@openai.com.

The API and ChatGPT are separate products, and usage or credits purchased for the API do not provide paid ChatGPT access.


I'd love it if I could use both my users at the same time to ask 2x questions.

My wife uses chatgpt only a few times a day.

I guess I need to 2x my browsers. I don't think this would work on the phone, because I believe I need my browser open for ChatGPT to continue its computations.


they don't want you to be able to communicate with your teammates without their knowledge



I think assistants / agents are going to be the big thing this year.

I was working on something at the end of November that was proposing competent PRs based upon a request for work in a GH issue. I was about halfway through the first iteration of a prompt role that can review, approve, and merge these PRs. The end goal is a fully autonomous software factory wherein the humans simply communicate via GH issue threads. I will probably be back on this project by mid-February or so. Really looking forward to it.

Bigger, more useful context is all I think I really want at this point. The other primitives can be built pretty quickly on top of next token prediction once you know the recipes.
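(A rough skeleton of that kind of loop, assuming PyGithub and the openai client; the repo name and model are placeholders, and the hard parts, i.e. applying the patch, running tests, and opening the actual PR, are elided.)

    # pip install PyGithub openai
    import os
    from github import Github
    from openai import OpenAI

    gh = Github(os.environ["GITHUB_TOKEN"])
    llm = OpenAI()                            # reads OPENAI_API_KEY
    repo = gh.get_repo("your-org/your-repo")  # placeholder

    for issue in repo.get_issues(state="open"):
        # Ask the model for a proposed change. Turning this draft into a
        # real branch + PR (git apply, tests, review) is the elided part.
        draft = llm.chat.completions.create(
            model="gpt-4",  # placeholder
            messages=[
                {"role": "system",
                 "content": "Propose a unified diff that resolves this issue."},
                {"role": "user", "content": f"{issue.title}\n\n{issue.body}"},
            ],
        ).choices[0].message.content
        issue.create_comment("Proposed patch (unreviewed):\n\n" + draft)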


Pricing:

$25 per user/month billed annually

$30 per user/month billed monthly


Why would they make it more expensive than individual plans?


It’s marked up so they can offer a “special” discount on the sales call, and the customer can report back to their superiors that they “negotiated” a better deal and saved their company $X.


ENTERPRISE baby. Business tiers are almost universally more expensive than individual tiers.


The delta is the price for disallowing training on chats while retaining all functionality of the web interface and app (e.g., Voice chat).



https://openai.com/chatgpt/pricing It is very clear in the highlights:

* Higher message cap.
* Create and share GPTs within the workspace.
* Admin console.
* No training.


I bet they will prioritize this traffic as well.


no training on the data; more requests per hour.


So basically only companies don’t get spied on now? Even paid accounts are subject to data collection by default?


Sounds like an upgraded Plus with privacy, so $30 for additional privacy compared to Plus.


Are there any new OpenAI opt in links that we might be missing?

Last one I remember was OpenAI GPT-4 API


no


no collaboration though, for actual collaborative team spaces give vello.ai a try


yeah, also https://team-gpt.com costs much less and provides a real AI collaboration workspace


The sooner we build a tool to filter out chatgpt generated garbage the better.


> The sooner we build a tool to filter out chatgpt generated garbage the better.

The sooner we build a tool to filter out garbage the better.

FTFY


We can’t filter out human generated garbage. Not sure how AI will be easier.


maybe we can ask the AI to filter it


dangGPT


Maybe we extend UTF so characters can have an AI generated flag.

People could work around it but it might help


The pattern in the footer of OpenAI pages is very annoying and unintelligent.


I thought teams would be cheaper than individual.


Have you ever seen an enterprise plan cheaper than the individual plans (which are often free)?

Normal software is priced to squeeze out as much money as possible: enterprises can afford more, so they are charged more. Individuals are highly price-sensitive, so it has to be very cheap.

GenAI is quite different in that it's not zero-marginal-cost; the marginal costs are probably at least 50% of the price. So the price difference between enterprise and individual plans will be far smaller than usual, due to the common cost base.


No, per-seat cost for these kinds of things is ALWAYS cheaper than retail. However it does require you to set up a meeting with a sales rep who will then work tirelessly to expand the number of services you use and require longer commitments etc. With “enterprise” pricing, the sticker price is just the opening number in a negotiation, and basic theory tells us that the opening number must be large since it sets a ceiling.


There are many products which cater to both individual users and enterprises, and these will often charge individual users more.

WinRAR is 30 EUR per user when buying a single license, 9 EUR when buying 100 licenses.


You don't even get the option to force SSO unless you "contact sales" for the enterprise tier, from what I can see :/


Woah. Bold move.


CEO will be the first job that AI replaces


I’ve got my stuff rigged to hit mixtral-8x7, and dolphin locally, and 3.5-turbo, and the 4-series preview all with easy comparison in emacs and stuff, and in fairness the 4.5-preview is starting to show some edge on 8x7 that had been a toss-up even two weeks ago. I’m still on the mistral-medium waiting list.

Until I realized Perplexity will give you a decent amount of Mistral Medium for free through their partnership.

Who is sama kidding they’re still leading here? Mistral Medium destroys the 4.5 preview. And Perplexity wouldn’t be giving it away in any quantity if it had a cost structure like 4.5, Mistral hasn’t raised enough.

Speculation is risky but fuck it: Mistral is the new “RenTech of AI”, DPO and Alibi and sliding window and modern mixtures are well-understood so the money is in the lag between some new edge and TheBloke having it quantized for a Mac Mini or 4070 Super, and the enterprise didn’t love the weird structure, remembers how much fun it was to be over a barrel to MSFT, and can afford to dabble until it’s affordable and operable on-premise.

“Hate to see you go, love to watch you leave”.


Here's a glossary to understand this post:

- mixtral-8x7 or 8x7: Open source model by Mistral AI.

- Dolphin: An uncensored version of the mistral model

- 3.5-turbo: GPT-3.5 Turbo, the cheapest API from OpenAI

- 4-series preview OR "4.5 preview": GPT-4 Turbo, the most capable API from OpenAI

- mistral-medium: A new model by Mistral AI that they are only serving through their API. It's in private beta and there's a waiting list to access it.

- Perplexity: A new search engine that is challenging Google by applying LLM to search

- Sama: Sam Altman, CEO of OpenAI

- RenTech: Renaissance Technologies, a secretive hedge fund known for delivering impressive returns improving on the work of others

- DPO: Direct Preference Optimization. It is a technique that leverages AI feedback to optimize the performance of smaller, open-source models like Zephyr-7B.

- Alibi: a Python library that provides tools for machine learning model inspection and interpretation. It can be used to explain the predictions of any black-box model, including LLMs.

- Sliding window: a type of attention mechanism used by Mistral-7B. It is used to support longer sequences in LLMs (a toy mask sketch follows this glossary).

- Modern mixtures: The process of using multiple models together, like "mixtral" is a mixture of several mistral models.

- TheBloke: Open source developer that is very quick at quantizing all new models that come out

- Quantize: Decreasing memory requirements of a new model by decreasing the precision of weights, typically with just minor performance degradation.

- 4070 Super: NVIDIA 4070 Super, new graphics card announced just a week ago

- MSFT: Microsoft
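(A toy illustration of the "sliding window" entry above, assuming numpy: each token may attend only to the previous `window` tokens rather than the whole prefix, which is how Mistral-7B keeps attention cost bounded on long sequences.)

    import numpy as np

    def sliding_window_mask(seq_len: int, window: int) -> np.ndarray:
        # True where attention is allowed: token i may see tokens
        # (i - window + 1) .. i, i.e. a causal mask clipped to a window.
        i = np.arange(seq_len)[:, None]  # query positions
        j = np.arange(seq_len)[None, :]  # key positions
        return (j <= i) & (j > i - window)

    print(sliding_window_mask(6, 3).astype(int))
    # Each row has at most 3 ones: the current token and the 2 before it.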


I asked ChatGPT to rewrite the original post using your glossary, which worked well:

I've set up my system to use several AI models: the open-source Mixtral-8x7, Dolphin (an uncensored version of Mixtral), GPT-3.5 Turbo (a cost-effective option from OpenAI), and the latest GPT-4 Turbo from OpenAI. I can easily compare their performances in Emacs. Lately, I've noticed that GPT-4 Turbo is starting to outperform Mixtral-8x7, which wasn't the case until recently. However, I'm still waiting for access to Mistral-Medium, a new, more exclusive AI model by Mistral AI.

I just found out that Perplexity, a new search engine competing with Google, is offering free access to Mistral Medium through their partnership. This makes me question Sam Altman, the CEO of OpenAI, and his claims about their technology. Mistral Medium seems superior to GPT-4 Turbo, and if it were expensive to run, Perplexity wouldn't be giving it away.

I'm guessing that Mistral AI could become the next Renaissance Technologies (a hedge fund known for its innovative strategies) of the AI world. Techniques like Direct Preference Optimization, which improves smaller models, along with other advancements like the Alibi Python library for understanding AI models, sliding windows for longer text sequences, and combining multiple models, are now well understood. The real opportunity lies in quickly adapting these new technologies before they become mainstream and affordable.

Big companies are cautious about adopting these new structures, remembering their dependence on Microsoft in the past. They're willing to experiment with AI until it becomes both affordable and easy to use in-house.

It's sad to see the old technology go, but exciting to see the new advancements take its place.


The GP did a great job summarizing the original post and defining a lot of cryptic jargon that I didn't anticipate would generate so much conversation, and I'd wager did it without a blind LLM shot (though these days even that is possible). I endorse that summary without reservation.

And the above is substantially what I said, and undoubtedly would find a better reception with a larger audience.

I'm troubled though, because I already sanitize what I write and say by passing it through a GPT-style "alignment" filter in almost every interaction precisely because I know my authentic self is brash/abrasive/neuro-atypical/etc. and it's more advantageous to talk like ChatGPT than to talk like Ben. Hacker News is one of a few places real or digital where I just talk like Ben.

Maybe I'm an outlier in how different I am and it'll just be me that is sad to start talking like GPT, and maybe the net change in society will just be a little drift towards brighter and more diplomatic.

But either way it's kind of a drag: either passing me and people like me through a filter is net positive, which would suck but I guess I'd get on board, or it actually edits out contrarian originality in toto, in which case the world goes all Huxley really fast.

Door #3 where we net people out on accomplishment and optics with a strong tilt towards accomplishment doesn't seem to be on the menu.


I would have said there is no problem with your style (nothing brash/abrasive), but you used a lot of jargon that people who are not very deep into LLMs (large language models) would not understand. The interests of Hacker News visitors are very diverse; not everyone follows LLMs that closely.


This was my take exactly. I read the original and thought, "Wow, this sounds like really interesting stuff this poster is excited about. I wish I knew what the terms meant, though. I'll have to come back to this when I have more time and look up the terms."

I was pleasantly surprised to find a glossary immediately following, which tells me it wasn't the tone of the post, but the shorthand terminology that was unfamiliar to me that was my issue.

I think writing in "Ben's voice" is great. There are just going to be times when your audience needs a bit more context around your terminology, that's all.


I think the only thing you really need to do is unpack your jargon so people who aren't exactly you can understand what you're saying. Even on this site, there are folks with all sorts of different experiences and cultural context, so shortcuts in phrasing don't always come across clearly.

For example, "in which case the world goes all Huxley really fast." "Huxley" apparently means something to you. Would it mean anything at all to someone who hasn't read any Aldous Huxley? As someone who _has_, I still had to think about it -- a lot. I assumed you're referring to a work of his literature rather than something he actually believed, as Huxley's beliefs about the world certainly had a place for the contrarian and the original.

Further, I assume you are referring to his most well-known work, _Brave New World_, rather than (for example) _Island_, so you're not saying that people would be eating a lot of psychedelic mushrooms and living together in tolerant peace and love.

I don't at all think you need to sound like GPT to be a successful communicator, but you will be more successful the more you consider your audience and avoid constructions that they're unlikely to be able to understand without research.


People aren’t passing you through a filter because you are brash and undiplomatic and “unaligned”, it’s because your communication style is borderline incomprehensible.


FWIW I found it comprehensible, just jargon-laden.


I used to struggle a lot in communication by talking to people in the authentic-self way you just described. Being too direct and voicing my point of view that way has caused tension with family, colleagues, and the girlfriend.

The moment I change the way I talk and say "That could be a challenging and rewarding experience" instead of "That's bullshit, let's move away from it", I can already see the advantage.

I'd rather talk the way I want, but I see it as challenging and not that rewarding, as people seem to get more sensitive. That made me wonder if the way GPT-style chatbots communicate with humans will make humans expect the same style of communication from other humans.


Porque no los dos? (Why not both?) While I truly appreciate your OP and could grok it even though I don't know the tech, the summary and rewrites saved me a ton of googling. I hope one day we have a 'see vernacular/original' button for all thought and communication, so people can choose what level to engage at without the author having to change their communication style. Translation for personal dialects, so to say.


Real Ben >> GPT Ben. However, if you are going out to the wider world you probably need to self-varnish a lot (I know I would have to). You are fine in here!

What you are alluding to is quite similar to that "Instagram face" that everyone pursues and self-filters for, except it's more about your communication and thoughts. Also, I don't think the argument that you need to reach a wider audience holds unless you want the wider audience to comment and engage.

The internet is the great homogenizer; soon(ish) we will be uniform.


I think this is just in the short-term. In the long term GPTs will retain our personality while making the message more understandable - which I think is the most important thing. Although McLuhan would disagree. Benefits, though, might include AI making cross-cultural translation so you can converse with someone with a different language and very different experiences and still understand each other. I think that's good? Maybe?


Personally I had no problem with your post being too abrasive or anything like that, it was just hard to understand because it was full of jargon.


Your posts are my favorite thing about Hacker News, both because of the things you're saying and the way you're saying them; please don't let anyone tell you otherwise.


Thank you! Amazing how difficult it is to keep up with all of the new jargon given how fast it's evolved. I had no idea that mistral-medium was so great.


Crazy, your post feels like downloading martial arts in the Matrix. I read the parent, didn't get a thing, and thought the guy was on substances. Read yours. Read the parent again. I speak AI now! I'm going to use this new power to raise billions!


I'm clearly spending far too much time tuning/training/using these things if a glossary to make my post comprehensible to HN is longer than my remark: thank you for correcting my error in dragging this sub-sub-sub-field into a thread of general interest.


That's an impressive list of jargon, whaha.

Love how deep the rabbit hole has gone in just a year. I am unfortunately in the camp of understanding the post without needing a glossary. I should go outside more :|


I think you've done a great explanation expansion, except I believe it's ALiBi ("Attention with Linear Biases Enables Input Length Extrapolation")[1], a method of positional encoding (i.e. telling the Transformer model how much to weight a distant token when computing the current output token). This has been used on various other LLMs[2].

[1]: https://arxiv.org/abs/2108.12409

[2]: n.b. Ofir Press is co-creator of ALiBi https://twitter.com/OfirPress/status/1654538361447522305
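(A toy sketch of the ALiBi idea from [1], assuming numpy: each attention head gets a fixed slope m, and the bias -m * (i - j) is added to the attention logits before softmax, so more distant tokens are penalized linearly.)

    import numpy as np

    def alibi_bias(seq_len: int, slope: float) -> np.ndarray:
        # Zero on the diagonal, growing linearly with distance below it.
        i = np.arange(seq_len)[:, None]  # query positions
        j = np.arange(seq_len)[None, :]  # key positions
        return -slope * np.maximum(i - j, 0)

    # Used as: logits = q @ k.T / sqrt(d) + alibi_bias(seq_len, head_slope)
    print(alibi_bias(5, 0.5))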


This is indeed what I was referring to and along with RoPE and related techniques is a sort of "meta-attention" in which a cost-effective scalar pointwise calculation can hint the heavyweight attention mechanism with super-linear returns in practical use cases.

In more intuitive terms, your bog-standard transformer overdoes it in terms of considering all context equally in the final prediction, and we historically used rather blunt-force instruments like causally masking everything to zero.

These techniques are still heuristic and I imagine every serious shop has tweaks and tricks that go with their particular training setup, but the RoPE shit in general is kind of a happy medium and exploits locality at a much cheaper place in the overall computation.
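To make the "cost-effective scalar pointwise" point concrete, here's a bare-bones RoPE sketch (illustrative only; real implementations precompute the sin/cos tables and fuse this into the attention kernel):

    import torch

    def rope(x: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
        # x: (seq_len, dim) queries or keys, dim even. Each consecutive feature
        # pair is rotated by an angle proportional to the token's position, so
        # dot products between rotated vectors depend on relative distance.
        seq_len, dim = x.shape
        freqs = base ** (-torch.arange(0, dim, 2, dtype=torch.float32) / dim)
        angles = torch.arange(seq_len, dtype=torch.float32)[:, None] * freqs
        cos, sin = angles.cos(), angles.sin()
        x1, x2 = x[:, 0::2], x[:, 1::2]
        out = torch.empty_like(x)
        out[:, 0::2] = x1 * cos - x2 * sin
        out[:, 1::2] = x1 * sin + x2 * cos
        return out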


My understanding is that Mistral uses a regular 4K RoPE context that it "extends" with SWA. This is based on looking at the results of Nous Research's Yarn-Mistral extension: https://huggingface.co/NousResearch/Yarn-Mistral-7b-128k and Self-Extend, both of which only apply to RoPE models.
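(For anyone unfamiliar: SWA = sliding window attention, where each token only attends to the previous W tokens and longer-range information propagates across layers. A toy mask, just to illustrate the idea:)

    import torch

    def sliding_window_mask(seq_len: int, window: int) -> torch.Tensor:
        # True where query i may attend to key j: causal and within the window.
        i = torch.arange(seq_len)[:, None]
        j = torch.arange(seq_len)[None, :]
        return (j <= i) & (j > i - window)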

Quite a few attention-extension techniques have been published recently:

* Activation Beacons - up to 100X context length extension in as little as 72 A800 hours https://huggingface.co/papers/2401.03462

* Self-Extend - a no-training RoPE modification that can give "free" context extension with 100% passkey retrieval (works w/ SWA as well) https://huggingface.co/papers/2401.01325

* DistAttention/DistKV-LLM - KV cache segmentation for 2-19X context length at runtime https://huggingface.co/papers/2401.02669

* YaRN - aforementioned efficient RoPE extension https://huggingface.co/papers/2309.00071

You could imagine combining a few of these together to basically "solve" the context issue while largely training for shorter context length.
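Self-Extend in particular is conceptually simple: within a local neighborhood, relative positions are kept exact; beyond it, they're bucketed with a floor division so the model never sees a distance larger than what it was trained on. A rough sketch of that grouped-position idea (my paraphrase, not the paper's exact code):

    import torch

    def self_extend_distances(seq_len: int, neighbor: int, group: int) -> torch.Tensor:
        # Relative distance from each query i back to each key j (causal part).
        i = torch.arange(seq_len)[:, None]
        j = torch.arange(seq_len)[None, :]
        dist = (i - j).clamp(min=0)
        # Exact distances inside the neighbor window; bucketed beyond it.
        grouped = neighbor + (dist - neighbor) // group
        return torch.where(dist <= neighbor, dist, grouped)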

There are of course some exciting new alternative architectures, notably Mamba https://huggingface.co/papers/2312.00752 and Megabyte https://huggingface.co/papers/2305.07185 that can efficiently process up to 1M tokens...


imo mistral-medium is worse than mixtral. Do you have API access?


Thank you!


As someone who follows AI pretty closely, this was unbelievably helpful in understanding the parent post. It's crazy how much there is to keep on top of if you don't want to fall behind everything that is going on in AI at the moment.


Did you just paste that into an LLM and ask it to create a glossary? :-P

(but seriously: Thanks !)


Emil didn't, but I did (and yeah, it's useless):

Mixtral-8x7: This appears to be a technical term, possibly referring to a software, framework, or technology. Its exact nature is unclear without additional context.

Dolphin locally: "Dolphin" could refer to a software tool or framework. The term "locally" implies it is being run on a local machine or server rather than a remote or cloud-based environment.

3.5-turbo: This could be a version name or a type of technology. "Turbo" often implies enhanced or accelerated performance.

4-series preview: Likely a version or iteration of a software or technology that is still in a preview or beta stage, indicating it's not the final release.

Emacs: A popular text editor used often by programmers and developers. Known for its extensibility and customization.

Mistral Medium: This might be a product or service, possibly in the realm of technology or AI. The specific nature is not clear from the text alone.

Perplexity: Likely a company or service provider, possibly in the field of AI or technology. They seem to have a partnership offering involving Mistral Medium.

RenTech of AI: RenTech, or Renaissance Technologies, is a well-known quantitative hedge fund. The term here is used metaphorically to suggest a pioneering or leading position in the AI field.

DPO, Alibi, and sliding window: These are likely technical concepts or tools in the field being discussed. Without additional context, their exact meanings are unclear.

Modern mixtures: This could refer to modern algorithms, techniques, or technologies in the field of AI or data science.

TheBloke: This could be a reference to an individual, a role within a community, or a specific entity known for certain expertise or actions.

4070 Super: This seems like a model name, possibly of a computer hardware component like a GPU (Graphics Processing Unit).

MSFT: An abbreviation for Microsoft Corporation.

On-premise: Refers to software or services that are operated from the physical premises of the organization, as opposed to being hosted on the cloud.


This is actually hilarious. It looks like a student who did not study for the exam but still tries their best to scratch out a point or two by filling the page with as many reasonable-sounding statements (a.k.a. "bullshit") as they can. Not that I expect more of a language model, no matter how "large".


This makes it actually more human (as in, this is how a real student would have reacted) and therefore more AGI-like than I would have anticipated.


> Not that I expect more of a language model, no matter how "large".

That's a weirdly dismissive statement. The fundamental problem is that a lot of these terms are from after the AI's cutoff point. It's perfectly able to handle terms like "Emacs", "RenTech" or "MSFT", and it can guess that "4070 Series" probably refers to a GPU.

ChatGPT in a few years will probably be perfectly able to produce the correct answers.

(Actually, ChatGPT consistently claims its current cutoff is April 2023, which should let it give a better answer, so I'm taking a few points off my explanation. But it still feels like the most probable one.)


GPT-4 is able to look terms up on the Internet if you ask, and will give you a list of specs, with a citation so you know it's not hallucinating them.

"4070 Super": https://chat.openai.com/share/0aac7d90-de65-41d0-9567-8e56a0...

"Mixtral-8x7": https://chat.openai.com/share/8091ac61-d602-414c-bdce-41b49e...


I love you, Emil


Thanks for this. I was initially wondering what this new GPT 4.5 model was and if I had somehow missed out on something big.


I have 20 years of software development experience, and I couldn’t understand anything you said. Is there a dictionary for this new lingo, or am I just too mid?


He speaks very unclearly: instead of saying GPT-4-turbo he says 4.5 preview. 4.5 is an invention of his.

Also "mixtral medium" - no idea what he means by that.

Not to mention the claim that Mixtral is as good as GPT-4. It's at the quality of GPT-3.5 at best, which is still amazing for an open-source model, but a year behind OpenAI.


Mistral-medium is a model that Mistral serves only via API since it's a prototype model. It hasn't been released yet, and it's bigger than the mixtral-8x7b model.


Sorry, but there's little that's unclear about what he said.

"mixtral medium" is just a typo: he means mistral-medium.

And GPT 4.5 is certainly not an "invention of his". Whether it exists or not (which is debatable; OpenAI said it was just mentioned in a GPT-4 hallucination and caught on), it's a version name that has been thrown around for a month or so in forums, blog posts, news articles and such.


Being thrown around in forums doesn't make it any more true...


Which is neither here nor there, since I didn't claim that it is true.

I claimed it's not an "invention of his [benreesman]", but a term that was already out there.


But it makes it not an invention of benreesman


I just spoke all night to 8x7B and can say that it sucks much less than 3.5. It doesn't screw up and apologize all the time (and screw up again), and doesn't repeat what I just said verbatim. That is on topics I have decent expertise in myself. With 3.5 I never had this experience of periodically forgetting that I wasn't in human company.


How are you accessing it?


Local setup: "text generation webui" with TheBloke/Mixtral-8x7B-Instruct-v0.1-GGUF (Q4_K_M) from HF. You can run it on a decent Intel CPU; it takes around 32.5GB of RAM including the OS (8GB for me). A GPU with tensor cores can speed up a few layers if you have one, but isn't required. I get around 2.5-3 t/s with an 8700 and a 4070 Ti; that's enough for chats that require some thinking.

Edit: I was using a 2k window; a larger one would probably eat more RAM. But even with 2k it didn't feel like it loses context or anything.
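If you'd rather script it than use the webui, llama-cpp-python will load the same GGUF file. A minimal sketch (the model path is whatever file you downloaded from TheBloke's repo; n_gpu_layers is optional offloading):

    from llama_cpp import Llama

    llm = Llama(
        model_path="mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf",
        n_ctx=2048,       # 2k window as above; larger windows eat more RAM
        n_gpu_layers=8,   # 0 for pure CPU
    )
    # Mixtral-Instruct expects the [INST] ... [/INST] prompt format.
    out = llm("[INST] Summarize sliding window attention in two sentences. [/INST]",
              max_tokens=256)
    print(out["choices"][0]["text"])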


For macOS and Linux, Ollama is probably the easiest way to try Mixtral (and a large number of models) locally. LM Studio is also nice and available for Mac, Windows, and Linux.

As these models can be quite large and memory intensive, if you want to just give it a quick spin, huggingface.co/chat, chat.nbox.ai, and labs.pplx.ai all have Mixtral hosted atm.


You can access it through Poe.com.


I think by Mixtral Medium they mean the MoE 2x13B which is at the top of the Hugging Face leaderboard? It is still not close to 8x175B, but size alone is not the most important factor. With smarter training methods and data it is possible we will see performance similar to GPT-4 in open-source mixtures of experts of smaller sizes.


On reflection this thread is pretty clearly of general interest and my comment was more jargon than language; I hang out in ML zones too much.

For a broad introduction to the field Karpathy's YouTube series is about as good as it gets.

If you've got a pretty solid grasp of attention architectures and want a lively overview of stuff that's gone from secret to a huge deal recently I like this treatment as a light but pretty detailed podcast-type format: https://arize.com/blog/mistral-ai


It was entertaining to follow and cross-check for sanity - admittedly I first read "RenTech" as a reference to https://theitcrowd.fandom.com/wiki/Reynholm_Industries

... which seemed to fit surprisingly well.


Ironically, one of the best scenes from the series (Reynholm trying to wake his desktop by shouting "HELLO COMPUTER! HELLO!!" for hours and hours) feels very "ai"...


I'm trying to go deeper; I'd be curious to know what other ML zones you keep track of.

I know r/LocalLlama, huggingface's Daily Papers, and TheBloke. Most of what YouTube throws at me is horrific clickbait. I feel like there are probably whole communities I'm missing out on.


I appreciate it, will check it out :)


Now you know how your girlfriend feels when she hears you speak with other software people :) Excuse my assumptions if they are incorrect. I'm making projections from my own point of view.


Romeo here flexing about how he has an actual girlfriend. Sorry, Casanova, we’re busy refining mobile-ready inference models and leetcoding while you go on your so-called “dates”


We developers only go on ISO 8601 dates, which somewhat limits the selection unfortunately.


Reddit-inspired humor on HN done right, hat tip, almost impossible to do (and I've tried for 15 years with less success).

lulz.


tbh I expected serious downvotage


[flagged]


You keep using this word "incel". I don't think it means what you think it means.

This is a classical joke about how nerds don't get romance/dating. People have been doing that since the time of Goethe.

Not something about bitter men who can't get laid and hate on women for it (which would be the incel case).


A nerd finds a talking frog by a pond. The frog says, "Hey, I'm not really a frog. I'm a beautiful princess, and if you kiss me, I'll turn back into a princess, and I'll grant you one wish."

The nerd thinks for a moment, then puts the frog in his pocket and continues walking.

The frog says, "Aren't you going to kiss me and make a wish?"

The nerd replies, "Nah, I'd rather have a talking frog."


Reminds me of something I watched recently - "Struggling to understand Gen Z Slang" [0].

[0]: https://www.youtube.com/watch?v=qGx4VtwMnfM


Just follow https://www.reddit.com/r/localllama to keep up to date on this stuff


Thanks!


"just" is carrying a loooot of weight there fyi. but given your 20 years... you should have some appreciation of that :)


Yeah that was completely incoherent to me as well.


Same bro


My apologies to both you and grandparent; I faux pas'd by using such sloppy jargon in a thread of general interest. If I can be useful in decoding some of that and helping to keep this topic accessible to the broader community, I'd be happy to answer any questions about what I meant. I've cited a few useful resources elsewhere in the thread and am always eager to talk shop on topics like this, so fire away if you have any questions about the field or what the hell I was talking about :)


A paragraph by paragraph "dumbed down" translation of your original words would be pretty neat to have for starters. Both to understand what you mean but also to understand the lingo.


I'm hardly the best person to give a point-by-point on how modern neural networks work. The original paper that kind of brought together a bunch of ideas that were floating around is "Attention Is All You Need" from 2017 (and those folks are almost certainly going to win a Turing), which built on a bunch of `seq2seq` and Neural Turing Machine stuff that was in the ether before that.

Karpathy has a great YouTube series where he gets into the details from `numpy` on up, and George Hotz is live-coding the obliteration of PyTorch as the performance champion on the more implementation side as we speak.

Altman being kind of a dubious-seeming guy who pretty clearly doesn't regard the word "charity" the same way the dictionary does is more-or-less common knowledge, though not often mentioned by aspiring YC applicants for obvious reasons.

Mistral is a French AI company founded by former big hitters at e.g. DeepMind that brought the best of 2023's public-domain developments into one model in particular, which shattered all expectations of both what was realistic with open weights and what was possible without a Bond Villain posture. That model is "Mixtral", an 8-way mixture-of-experts model using a whole bag of tricks, but key among them are:

- gated mixture of experts in attention models (sketched in code below)

- sliding window attention / context

- direct-preference optimization (probably the big one and probably the one OpenAI is struggling to keep up with, probably more institutionally than technically, as probably a bunch of bigshots have a lot of skin in the InstructGPT/RLHF/PPO game)

It's common knowledge that GPT-4 and derivatives were mixture models but no one had done it blindingly well in an open way until recently.
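For anyone who wants the gist of the gated MoE trick in code, here's a stripped-down top-2 router over expert FFNs; this is a conceptual sketch, not Mistral's implementation (theirs is heavily optimized, and the per-expert loop below would be batched in practice):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class Top2MoE(nn.Module):
        def __init__(self, dim: int, num_experts: int = 8):
            super().__init__()
            self.experts = nn.ModuleList(
                nn.Sequential(nn.Linear(dim, 4 * dim), nn.SiLU(), nn.Linear(4 * dim, dim))
                for _ in range(num_experts)
            )
            self.gate = nn.Linear(dim, num_experts, bias=False)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (tokens, dim). The gate routes each token to its top-2 experts;
            # only those experts run, which is why 8x7B is cheap at inference.
            weights, idx = self.gate(x).topk(2, dim=-1)
            weights = F.softmax(weights, dim=-1)
            out = torch.zeros_like(x)
            for e, expert in enumerate(self.experts):
                for k in range(2):
                    mask = idx[:, k] == e
                    if mask.any():
                        out[mask] += weights[mask, k, None] * expert(x[mask])
            return out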

SaaS companies doing "AI as a service" have a big wall in front of them called "60%+ of the TAM can't upload their data to random-ass cloud providers much less one run by a guy recently fired by his own board of directors", and for big chunks of finance (SOX, PCI, bunch of stuff), medical (HIPAA, others), defense (clearance, others), insurance, you get the idea: on-premise is the play for "AI stuff".

A scrappy group of hackers too numerous to enumerate but exemplified by `ggerganov` and collaborators, `TheBloke` and his backers, George Hotz and other TinyGrad contributors, and best exemplified in the "enough money to fuck with foundation models" sense by Mistral at the moment are pulling a Torvalds and making all of this free-as-in-I-can-download-and-run-it. This gets very little airtime all things considered, because roughly no one sees a low-effort path to monetizing it in the capital-E enterprise: that involves serious work and a very low shadiness factor, which seems an awful lot like hard work to your bog-standard SaaS hustler and offers almost no mega data-mining opportunity to the somnambulant FAANG crowd. So it's kind of a fringe thing in spite of being clearly the future.


What is "Mixtral Medium"? Is it a next version of Mixtral and can it be downloaded somewhere?


No. It's Mistral Medium, one of the proprietary models by Mistral AI, and it's only accessible via their API: https://docs.mistral.ai/

It currently ranks 4th on the chatbot arena leaderboard (slightly behind the GPT-4 entries' Elo ratings): https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboar...


surely this guy is a bot.


or spending too much time with bots...


Now that seems more likely for better or worse :)


For a bot/spam thing, 16 years seems like a lot of planning and/or effort to put in? I don't know the typical age of people on HN these days but I can't imagine there are a lot of 2008 joins still floating around as a percentage of all commenters.


We're all too mid. Luckily, these days we hoomans have AIs to help us understand other hoomans. Here are GPT-4-1106-preview and Perplexity.ai trying to shed some light on what was being said: https://pastebin.com/JuxfdrLg

Hilariously, neither knows who sama is (Sam Altman, the Drama King of OpenAI), nor do they recognize when they themselves are being discussed.

Reading the responses in full also gives you a glimpse of the specific merits and weaknesses of these systems, namely how up to date their knowledge and lingo are, their explaining capabilities, and their ability to see through multiple layers of referencing. It also showcases whether the AIs are willing to venture a guess to piece together some possible interpretation for hoomans to think about.


I absolutely love pointing these things at each other and watching them go.

I screen-capped my take on this to prove* that I was actually wiring all this stuff up and plug my nascent passion/oss project, but it's really funny comparing them either way: https://imgur.com/WDrqxsz


He is all over the place, mixing tech specifics with unproven models.

Basically, he said he is happy with Mistral 8x7B and thinks it is on par with or better than OpenAI's closed-source model.


Well, the LMSys leaderboard interestingly places it just 10 Elo points under the GPT-4 entries, so one would have to consider it somewhat on par.
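(For a sense of scale: under the standard Elo expectation formula, a 10-point gap means the higher-rated model is expected to win only about 51.4% of head-to-head votes. A generic sketch of the update rule; the arena's exact aggregation may differ:)

    def elo_update(r_a: float, r_b: float, score_a: float, k: float = 32.0):
        # score_a: 1.0 if A wins, 0.5 for a tie, 0.0 if B wins.
        expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))
        delta = k * (score_a - expected_a)
        return r_a + delta, r_b - delta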


Oh thank you, I was reading and none of that made any sense to me. I thought it could be a presentation of some dumb AI output. Now I see I’m not alone.


That gave me an idea; here is what I got from Copilot:

You have set up your system to run different AI models and compare their performance using a text editor. You are using Mixtral-8x7, a high-quality open-source model developed by Mistral AI, Dolphin, an emulator for Nintendo video games, 3.5-Turbo, a customized version of GPT-3.5, a powerful natural language model, and 4-Series Preview, a new version of the BMW sports coupe. You have noticed that the 4.5-Preview, an upcoming update of GPT-3.5, is slightly better than Mixtral-8x7, which used to be a close match. You are still waiting to access Mistral-Medium, a prototype model that is even better than Mixtral-8x7, but only available to a limited number of users.

You have discovered that Perplexity, an AI company that provides information discovery and sharing services, offers free access to Mistral-Medium through their partnership with Mistral AI. You think that Perplexity is making a mistake by giving away such a valuable model, and that they are underestimating the superiority of Mistral-Medium over the 4.5-Preview. You also think that Mistral AI is the new leader in the AI industry, and that their techniques, such as DPO (Data Processing Optimization), Alibi (a library for algorithmic accountability), sliding window (a method for analyzing time series data), and modern mixtures (a way of combining different models), are well-known and effective. You believe that the advantage of Mistral AI lies in the gap between their innovation and the ability of other developers to replicate it on cheaper and more accessible hardware. You also think that the enterprise market is not fond of the complex structure of GPT-3.5 and its variants, and that they prefer to use Mistral AI's models, which are more affordable and operable on their own premises.

You end your text with a quote from the movie Armageddon, which implies that you are leaving a situation that you dislike, but also admire.


I lost it when it brought in the BMW. Thanks for the laughs.


I don't think it has anything to do with a BMW sports coupe.

What I am confused about, though, is that the parent seems to be mentioning models beyond the GPT-4 instance I currently have access to. I checked their Twitter and have seen no announcement of any 4.5 or 4-series previews. Is this just available to people using the API, or did I miss something?


> I dont think it has anything to do with a BMW sports coupe

Well, the Paul Ricard circuit in France has a straight called Mistral. Plenty of BMWs have been there for sure, and a zillion other cars.

I wonder if that could have confused the AI a little in combination with other hints. Turbo?

If that's a thing maybe we should start picking our names not only to make them googlable but also not to confuse LLMs at least for the next few years. Months?


The OpenAI API is currently advertising their preview to my clients as `gpt-4-1106-preview`. I've been calling the Q4 2023 4-series stuff `4.5` or `4.5-preview` to distinguish it from the pre-big-announcement stuff.

I don't recall if I saw any press calling anything `4.5`, but it's a different model in some important ways (one suspects better/cheaper quantization at a minimum) and since they've used `.5` for point releases in the past it seemed the most consistent with their historical versioning.


OP might be referring to gpt-4-turbo.



What kind of emacs distribution is on the screenshot?


It's a new theme and/or distribution that I'm working on under the working title `hyper-modern`. It clearly takes inspiration from things like Doom (and I'm using their modeline, which is hard to improve on), but it's mostly ground-up and AI-first.

It's heavily integrated with my custom model server and stuff and I'm slowly getting it integrated with other leading tools (vscode and nvim and stuff).

I plan to MIT it all once it's at a reasonable RC. If I get there it will be available at `https://hyper-modern.ai` and `https://github.com/hyper-modern-ai`.

Thanks for asking!


"Dolphin, an emulator for Nintendo video games", but in this context it refers to "Dolphin, an open-source and uncensored, and commercially licensed dataset and series of instruct-tuned language models based on Microsoft's Orca paper." https://erichartford.com/dolphin


It's a specific lingo that evolved over the last two years with the rise of LLMs. Those who have been following the development of LLMs will understand it.


Just had to say that the original comment, and then yours right after, is a great combo. Laughed my ass off :)


Normally I'm the mid-ass, out of touch old guy, it's a rare chance to be the one using the insider lingo :)


respectfully, 20 yrs of software dev experience doesn't entitle you to understand the last 2 months of AI if you didn't spend the effort to keep up. jargon happens, it's not your fault, but people also need to communicate thoughts concisely given a base of knowledge. it's ok to ask of course, but the rest of us who have been keeping up can parse this well enough (even though I disagree with some of the assertions)


I'm snarkier than most on HN and have the scars to prove it, and I do miss the RTFM-by-default vibe of the early days, but on this one topic as you can see I'm going out of my way to apologize for being cryptic and try to include everyone in the conversation because this shit matters a lot even by our standards.

Give me a hand?


i do my part :) i do lots of explaining in audio and text https://latent.space/p/dec-2023

am just here to vibe in my down time


Glad to make your acquaintance as my folks used to say. I'll check out the link and remember the handle.


They are referring to LLMs. It's not about how much software dev experience you have.


I have heard of LLMs, and understand most everything posted on HN, except quantum computing stuff.


I think in this case it's much like the difference between understanding what Pokemon are, and actively playing each game so you know the names of most Pokemon.


Half LLM, half boomer


> Mistral Medium destroys the 4.5 preview.

On what metrics? LMSys shows it does well but 4-Turbo is still leading the field by a wide margin.

I am using 8x-7b internally for a lot of things and Mistral-7b fine-tunes for other specific applications. They're both excellent. But neither can touch GPT-4-turbo (preview) for wide-ranging needs or the strongest reasoning requirements.

https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboar...

EDIT: Neither can mistral-medium, which I didn't discuss but which is in the leaderboard link.


Keep in mind that modern quantitative approaches to LLM evaluation have been effectively co-designed with the rise of OpenAI, and folks like Ravenwolf routinely disagree with the leaderboards.

There's also very little, if any, credible literature on what constitutes a statistically significant difference on MMLU or whatever. There's such a massive vested interest from so many parties (the YC ecosystem is invested in Sam, MSFT is invested in OpenAI, the US is invested in not-France, a bunch of academics are invested in GPT-is-borderline-AGI, Yud is either a Time Magazine cover author or a Harry Potter fanfic guy, etc.) in seeing GPT-4.5 at the top of those rankings, and in taking the bold entry at < 10% lift as state of the art, that I think everyone should just use a bunch of them and optimize per use case.

I have my own biases as well and freely admit that I love to see OpenAI stumble (no I didn't apply to work there, yes I know knuckleheads who go on about the fact they do).

And once you factor in "mixtral is aligned to the demands of the user and GPT balks at using profanity while happily taking sides on things Ilya has double-spoken on", even e.g. MMLU is nowhere near the whole picture.

It's easy and cheap to just try both these days, don't take my word for which one is better.


> It's easy and cheap to just try both these days, don't take my word for which one is better.

I literally use 8x-7b on my on-prem GPU cluster and have several fine-tunes of 7b (which I said in the previous post). I've used mistral-medium.

GPT-4-turbo is better than them all on all benchmarks, human preference, and anything that isn't biased vibes. My opinion - such that it is - is that GPT-4-turbo is by far the best.

I have no vested interest in it being the best. I'd actually prefer if it wasn't. But all objective data points to it being the best and most lived experiences that are unbiased agree (assuming broad model use and not hyperfocused fine-tunes; I have Mistral-7b fine-tunes beating 4-turbo in very limited domains, but that hardly counts).

The rest of your post I really have no idea what's going on, so good luck with all that I guess.


Mistral Medium beats 4.5 on the censorship benchmark. It doesn't refuse to help with anything that could be vaguely non-PC or could potentially be used to hurt anyone in the wrong hands, including dangerously hot salsa recipes.


That's not a metric.

That's a use case.

Certainly, no one here is disputing that there are things OpenAI refuses to allow, and given that the effectiveness of using GPT-4 on them is literally zero, a sweet potato connected to a spring and a keyboard will "beat" GPT-4, if that's your scoring metric.

If you want a meaningful comparison you need tasks that both tools are capable of doing, and then see how effective they are.

Claiming that mistral medium beats it is like me claiming that RenderMan beats DALL-E 2 at rendering 3D models; yes, technically they both generate images, but since it's not possible to use DALL-E 2 to render a 3D model, it's not really a meaningful comparison, is it?


> If you want a meaningful comparison you need tasks that both tools are capable of doing, and then see how effective they are.

The fact it’s incapable of simple requests that an alternative can is absolutely part of a worthwhile comparison.


You’re just twisting what “best” means to suit your bias.

That is not a measure of how sophisticated and capable a model is.

GPT-4 is a more sophisticated, more capable model than Mistral.

If that doesn’t make it the “better” for you, that’s fine; but any attempt to argue about the capabilities of the models is misguided.

Restrictions placed on a model are an orthogonal concern to its capabilities.

…but sure, you can invent some benchmarks to score models on other criteria, which is entirely valid.

It’s perfectly fair to say that GPT4 doesn’t top all possible metrics… only the meaningful ones about model capabilities.


Semantics.

Both tools are generative systems that produce text in response to a prompt. If Mistral were mute on random topics for no other reason than that its makers dislike talking about them, would you say it doesn't count?


I'm a big proponent of freedom in this space (and remain one), but Dolphin is fucking scary.

I don't have any use cases for crime in my life at the moment beyond wanting to pirate something like Adobe Illustrator before signing up for an uncancelable subscription, but it will do arbitrary things within its abilities, and it's Google with a grudge in terms of how to do anything you ask. I stopped wanting to know when it convinced me it could explain how to stage a coup d'etat. I'm back on mixtral-8x7b.


Agree with this. I would say that the rate of progress from Mistral is very encouraging though in terms of having multiple plausible contenders for the crown.


> Keep in mind that modern quantitative approaches to LLM evaluation have been effectively co-designed with the rise of OpenAI, and folks like Ravenwolf routinely disagree with the leaderboards.

Sorry but you're talking complete nonsense here. The benchmark by LMSys (chatbot arena) cannot be gamed, and Ravenwolf is a random-ass poster with no scientific rigor to his benchmarks.


Cannot be gamed? C'mon now... You could pay a bunch of people to vote for your model in the arena.


No you can't, because you actually don't know which model is which when you vote.


Do only the initial votes count? Because after I made an initial choice I was then put in a session where I saw the names of both of the AIs. I made subsequent votes in that session where I could see their names.



It just feels like "which LLM is better" is becoming the new "which GPU is better" type of talk. It's great to find a clear winner, but in the end the gap between the leaders isn't an order of magnitude.


These days the question is more about which LLM is second best. It’s very tight while ChatGPT 4 is in its own league.


I think people are missing the context that the prices of even the largest LLMs trend towards $0 in the medium term. Mistral-medium is almost open source, and it's still early days.


Dolphin-mixtral is incredible for the size that it is. But I'm curious, have you tried Goliath-120b or the new `Mixtral_34Bx2_MoE_60B` (it's named Mixtral but the base is actually Yi)?

Goliath is too big for my system but Mixtral_34Bx2_MoE_60B[1] is giving me some really good results.

PSA to anyone who does not understand what we're talking about: I was new to all of this until two weeks ago as well. If you want to get up to speed with the incredible innovation and home-tinkering happening with LLMs, you have to check out https://www.reddit.com/r/LocalLLaMA/

I believe we should be at GPT-4 levels of intelligence locally sometime later this year (possibly with the release of Llama 3 or an open Mistral Medium model).

[1] - https://huggingface.co/TheBloke/Mixtral_34Bx2_MoE_60B-GGUF


Speculative musings beckon, and we dare to embrace them. The crux of the matter appears to be the chasm that separates novel advancements from the moment they are quantified for mainstream consumption. Retaining vivid memories of past entanglements with industry titans, circumspectly explore and exploit these innovations until they become both affordable and practicable for on-premise utilization, finally unveiling competitive prowess. The overarching question looms large. Perhaps, Mistral has not yet amassed the financial resources commensurate with such largesse.

"My hips don't lie."



Not bad! It's having trouble with syllables tho.


Was this generated by some AI? Is it a parody?


I've made similar apologies upthread but I'm passionate about this being an inclusive conversation and so I'm trying to respond to everyone who I confused with all the jargon.

The trouble with the jargon is that it obfuscates to a high degree even by the standards of the software space, and in a field where the impact on people's daily lives is at the high end of the range, even by the standards of the software space.

HN routinely front-pages stuff where the math and CS involved is much less accessible, but for understandable reasons a somewhat tone-deaf comment like mine is disproportionately disruptive: people know this stuff matters to them either now or soon, and it's moving as quickly as anything does, and it's graduate-level material.

If you have concrete questions about what probably looks like word salad I'll do my best to clarify (without the aid of an LLM).


Not sure what all the fuss is about regarding the incomprehensibility of this. It's a densely packed comment, information-wise, and expects familiarity with the field, but there's nothing really that obscure about it.

I might not know half of the references like "sama" or "TheBloke", but I could understand the context of them all. Like:

"the lag between some new edge and TheBloke having it quantized for a Mac Mini or 4070 Super,"

Not sure who TheBloke is, but he obviously means "between some new (cutting) edge AI model, and some person scaling it to run on smaller computers with less memory".

Similarly, not sure who Perplexity is, but "Until I realized Perplexity will give you a decent amount of Mistral Medium for free through their partnership" basically spells out that they're a service provider of some kind, that they have partnered with Mistral AI, and you get to use the Mistral Medium model through opening a free account on Perplexity.

I mean, duh!


I'm still waiting for the AI encabulator.


Had a good laugh about your comment, then realized that this is _exactly_ what AI would be really good at...

Basically let an AI hallucinate on some technical subject. It would make a great script for a new encabulator video.


Care to share what you are using it for?

I'm curious because I'm gathering some use cases, so that I can share them internally at the company to provide better education on what LLMs do and how they work.


Any chance you could post some comparisons between Mistral medium and gpt-4 turbo? I'm curious where you think it's more impressive, I hadn't spent the time to evaluate it yet.


Go to the Arena (side-by-side) tab on LMsys and you can try it yourself!

https://chat.lmsys.org/

It's a great tool they make available.


Can you share some examples of how you are using it? Mixtral that is? What's your setup? What's your flow/workflow?


I screenshotted my emacs session upthread in a bit of a cheeky "AI-talking-about-AI" joke: https://imgur.com/WDrqxsz.

While I heavily rely on `emacs` as my primary interface to all this stuff, I'm slowly-but-surely working on a curated and opinionated collection of bindings and tools and themes and shit for all the major hacker tools (VSCode, `nvim`, even to a degree the JetBrains ecosystem). This is all broadly part of a project I'm calling `hyper-modern` which will be MIT if I get to a release candidate at all.

I have a `gRPC` service that wraps the outstanding work by the "`ggerganov` crew", loosely patterned on the sharded model-server architectures we used at FB/IG, and mercilessly exploiting the really generous free plan offered by the `buf.build` people (seriously, check out the `buf.build` people) in an effort to give hackers the best tools in a truly modern workflow.

It's also an opportunity to surface some of the outstanding models that seem to have sunk without a trace (top of mind would be Segment Anything out of Meta and StyleTTS which obsoletes a bunch of well-funded companies) in a curated collection of hacker-oriented capabilities that aren't clumsy bullshit like co-pilot.

Right now it's a name and a few thousand lines of code too rough to publish, but if I get it to a credible state the domain is `https://hyper-modern.ai` and the code will be MIT at `https://github.com/hyper-modern-ai/`.


Feel free to push the code. Elisp should be hacky and embarrassing; otherwise you are doing it wrong :)


I'm curious about your workflow including all of these - is it only for your curiosity? Do you switch between them for specific tasks, or even run them in parallel for some purpose?

Also, is anyone aware of a service that supplies API endpoints for dolphin? I'd love to experiment with it, but running locally exceeds my budget.


Curious that you mentioned "4.5-preview". What do you mean there?

To my knowledge, and I searched to confirm, GPT-4.5 is not yet released. There were some rumors and a link to ChatGPT's answer about GPT-4.5 (could also be a hallucination) but Sam tweeted it was not true.


They literally made it up.


That seems a little harsh. There was clearly what amounted to an internal point release in Q4, there was a big announcement and the historical convention on versioning is `.5` increments.

It's "unofficial" but "literally made it up" seems a bit unfair, it's not like I called it `GPT-4-Ti Founders Edition` and tried to list it on eBay.


They meant GPT-4 Turbo, which is an improvement over GPT-4.


Thanks for the insights. What is your typical Emacs workflow for using and comparing the models?


I'm running custom stuff that I plan/hope to MIT soon, but `gptel` is killer and I've substantially plagiarized it feature-wise in my own dots. (I don't intend to release anything under a more permissive license than it was published under; I merely mean that it sets the bar on a good interface and I plan to launch nothing less capable.)


I understand some of these words.

In all seriousness, are self-hosted GPT alternatives really viable?


If anyone understands this post you are worth a million dollars. Get that bag!


>Alibi

Do you have a source on Mistral/Mixtral using that?


No; they could be using any of the variants of pointwise scalar trig-style embedding, and one imagines it's at least a little custom to their particular training setup.

It was just an example of a modern positional encoding. I regret that I implied inside knowledge of that level of detail. They're doing something clever with scalar pointwise positional encoding, but as for what, who knows.


What a nonsensical statement.


Could a moderator change the "Teams" in the title to lowercase (as it is in the article)? Capitalizing Teams misleadingly implies it's regarding Microsoft's chat platform.


I disagree.

We shouldn't assume a basic sentence capitalised word refers to a product.

If a reference to a product is intended, we should clarify that association some other way; i.e. MS Teams.


>We shouldn't assume a basic sentence capitalised word refers to a product.

What's happening is that lowercase "sentence case" titles have become more popular and normalized so repeated exposure to that style can cause a subconscious heuristic of "Capitalized letter signifies a Brand Name or Proper Noun". You can try to advise people not to assume that but it doesn't change the type of "sentence case" titles people are now repeatedly exposed to.

The New York Times still uses "Title Case" but a lot of other newspapers switched to lowercase sentence case. Washington Post switched in 2009. And Los Angeles Times, The Boston Globe, the Chicago Tribune, the San Francisco Chronicle, Philadelphia Inquirer, etc all followed.

Other popular websites with lowercase titles include Vox, ArsTechnica, TechCrunch, etc.


Given how Microsoft has been putting OpenAI models all over their products, I was 100% certain the title referred to including ChatGPT in Microsoft Teams and only surprised it wasn’t called Co-Pilot for Teams.

Unfortunately, I don’t think I’m the only one.


I've preferred it ever since I learned the APA reference format, which uses it.


In this case the original title is "ChatGPT for teams", so the headline was actually editorialized when it was submitted, which is discouraged by HN guidelines.

Normally it would be ok to capitalize words to match what many other US publications use, but this capitalization introduces confusion. I can only speak for myself, but I made the assumption that this was an integration with MS Teams. This would have been avoided if the original title was kept.


Title case is unavoidably mentally taxing. It's trivial to convert sentence case to title case, but highly non-trivial in general to convert title case to sentence case (I know, I write code to do this :-)). To do it right, context, or even general intelligence, is required. Basically, converting to Title Case loses information (capitalization) that can be important.

Title case was invented when there were much more limited typographical options than now, to emphasize, well, titles. On a web page there are so many better ways to do it that there is no reason to preserve the archaic convention.

To my eye it does not even look that nice...


> I made the assumption that this was an integration with MS Teams.

"ChatGPT for Microsoft Teams", marketed under the name Copilot, has been a Microsoft Teams feature for a while. What new thing did you expect when you read this?


I don't follow the news closely, so I was not aware that Copilot is based on ChatGPT or that MS had already integrated Copilot with Teams.


The reading process is automatic. It's quite likely that capitalized "Teams" is read by many as the product name. And that's reasonable, because in most daily contexts "teams" refers to actual teams and "Teams" refers to the product. Nobody (including you) is going to teach them to stop doing that. Adopting a rather dumb capitalization rule is a lot easier.

I do agree that editors should read their topics critically and add disambiguating text (if possible and permitted).


Teams is a very generic product name. Microsoft can live with the consequences of that.

When I read the headline, I was not thinking of MS Teams. Because I only use MS Teams a few rare times a year. Mainly I use Zoom.

But even if OpenAI had an article that, under HN's headline-capitalization rules, ended up reading "Dall-E Adds Zoom Feature", I would have imagined that it was about being able to zoom into pictures, not automatically assumed that it had anything to do with Zoom. Even though I use Zoom almost every day.


> Teams is a very generic product name. Microsoft can live with the consequences of that.

Quite the contrary: Microsoft earns free advertising from this. The problem is for the people who misunderstand the heading, not for Microsoft.

OK, you would understand it correctly even in the Zoom example. So?

Headings should be optimized for as many people as possible. There is a clear possibility that a lot of people misunderstand this title, so why not simply edit it? In that case neither you nor anyone else would understand something wrong.


The vast majority of people aren't going to make the association, even within the HN demographic.

I think people are vastly overestimating Microsoft's ownership of 'Teams', even within a tech context.

Plus, everything we've seen from Microsoft in their partnership with OpenAI has been Co-Pilot. Which is why I use MS Teams daily and did not make the connection.


You are wrong and the vast majority of people are making the association.

I showed the title to five colleagues and all of them assumed it referred to Teams the product. Not to mention the overwhelming majority of people in these threads.


Well, perhaps they meant "vast majority of people that count", which for them includes F7F7F7, F7F7F7, and of course F7F7F7! :-)


How do you know about "vast majority of people"? Did you run a survey?

I for one also thought it was Microsoft Teams in the heading.

If a heading misleads even a fraction of readership, for no apparent reason (what is the benefit of having a title in Title Case?), maybe it's better that the heading is changed, no?


We shouldn't. However, a style guide that requires Title Case for a digital element that is very clearly a title (made clear by the various web designs in which it appears) is outdated, and sentence case is more appropriate.


>We shouldn't assume a basic sentence capitalised word refers to a product.

We shouldn't, but many of us do. As a title word, there's ambiguity about whether it's a proper noun or not, given title styling. Given the context in this case (HN, OpenAI, ChatGPT) it was pretty difficult for my brain not to assume it was referring to Microsoft Teams, so it baited me in, perhaps unintentionally. I'm not too upset about it because I knew that going in, but nonetheless, a quick read of the title suggests it would have been better to call it "ChatGPT for Collaboration" or something of that nature.


When is the last time Microsoft released an AI product that wasn’t branded as Co-Pilot?


I thought it was referring to MS teams initially too.


Same here. Using the correct lowercase word "teams" would enable everyone to focus on the product.

Anything else is just wasting time.


Should do the same for Apple?


We shouldn’t, but we do. Get over it and accept reality.


Why should the sentence be capitalized in the first place? A quick scan of other submissions seems to indicate it is not, at least for longer sentences. Why should a shorter sentence be, then?

Claiming it is a title wouldn't win the argument either, as it is not a rule that titles must have title casing. Both (title case vs. first letter capital only) are valid typography for a title in English.


I also assumed that this was ChatGPT for MS Teams. The title that the webpage uses is "ChatGPT for teams" and I believe that HN tries to use the titles from the linked page verbatim.


HN autocapitalizes titles based on a standard English guideline (Chicago probably).


The title on the submitted webpage is "ChatGPT Team", singular.


>The title on the submitted webpage is "ChatGPT Team", singular.

Your reply is adding more confusion.

The "ChatGPT Team" is the title of the webpage from a user comment: https://news.ycombinator.com/item?id=38942936

(The capitalized "Team" is part of a Branded Product Name.)

The "ChatGPT for teams" (lowercase 't') is the actual original title of the submitted webpage for the whole thread: https://openai.com/chatgpt/team

(The lowercase "teams" is a generic noun to describe the intended users.)


>"ChatGPT Team" is the title of the webpage from a user comment... "ChatGPT for teams" (lowercase 't') is the actual original title of the submitted webpage

Sorry, I think you're the one adding confusion.

"ChatGPT for teams" is not the title of the submitted webpage. It's the heading of the article on that page, but the actual title, as shown in the tab strip and defined in the html header, is "ChatGPT Team":

   view-source:https://openai.com/chatgpt/team
     <meta property="og:title" content="ChatGPT Team">
Hacker News's code uses the html title element (which in this case is actually a meta element) to automatically title submissions.


>It's the heading of the article on that page, but the actual title,

Yes, I understand that, but 99% of this thread talking about the "title" means the article's headline (title), the one with the word "for" in it - "ChatGPT for teams" - and not the HTML tags <title>ChatGPT Team</title> or <meta property="og:title">.

In other words, people in this thread are not complaining about "ChatGPT Team" in the browser's tab title or confused by it. Instead, they're talking about the other title "ChatGPT for Teams" that was submitted and manually changed from lowercase 't' to uppercase 'T' and prominently visible at the top of this thread. That's the context of this meta discussion about confusing capitalization causing some readers to incorrectly assume the headline is about a new ChatGPT addon feature for MS Teams: (https://www.microsoft.com/en-us/microsoft-teams/group-chat-s...)

The singular "Team" in product name "ChatGPT Team" doesn't cause the same confusion because Microsoft doesn't have a branded product called "Team". That's why citing "<title>ChatGPT Team</title>" ... does not help clarify things and just adds more confusion to this thread.

>Hacker New's code uses the html title element (which in this case is actually a meta element) to automatically title submissions.

Are you sure? In the past, HN commenters have been annoyed that the HN software does not extract and parse the HTML <title> element automatically, and defers too much to the titles that submitters manually type in, which often leads to editorialization and/or confusion.


But that's how headlines work


The UK stopped capitalising every word in headlines back in the 1980s, maybe earlier. It looks much nicer and is more readable. The bold tells you it is a headline. US newspaper graphic design seems very retro to me.


It really depends on what style guide a news organization follows.

https://en.m.wikipedia.org/wiki/Title_case


The majority of designers I know prefer title case, whether it's a printed poster or in-app UI.

Look no further than “Add Comment” at the top of this page.


What? It says "add comment"


The page title is “ChatGPT for teams”


I agree, but why didn't the original article do it? Or did they see the potential for confusion and deliberately not?

When a common word is a product / brand, how do you use that word in a title without bringing up associations with that product / brand?

Aren't the guidelines to submit the title as-is, with no editorializing?


> but why didn't the original article do it?

Local style guide says not to capitalise headlines, presumably.

Though you are right, the general rule here is not to change headlines and that should probably apply to style as well as wording. Looking at other posts on the main list there seems to be a mix as the original styles (sentence cap, all words capitalised, all but articles, …) have been kept.


You could argue that Microsoft’s claim on common words (Teams, Windows) is the problem here.


You could, but I think that ship has sailed.


maybe Microsoft shouldn't have overloaded a generic word as a brand then.


You'll own no words and be happy.


Microsoft is the misleading one here for calling their generic chat platform "Teams".

Are you also going to complain if someone releases a platform called "The"?


Someone call Apple. I've been hungry for a long time


All this time I thought it was a Microsoft feature, which is now confusing. Won't Microsoft basically integrate GPT-4 into Teams?


Yeah, same here. Regarding that, this is essentially that (same LLM and all) https://techcommunity.microsoft.com/t5/microsoft-teams-blog/...


Or maybe you are spreading the anti-AI AI club movement > https://twitter.com/daniel_nguyenx/status/174508587502795617... :D

j/k, but I'm finding it pretty funny these days that more and more people are switching to lowercase, assuming it started from this @sama tweet: https://twitter.com/sama/status/1735123080564167048


Maybe in the AI Twittersphere it started with sama, but it’s been becoming common in Gen Z culture for a few years now. If you look up popular bands at the moment, you’ll find a decent chunk of them only use lowercase in their band and song names.


I'm in Gen Z and usually type in all lowercase on my phone because I have autocorrect turned off and capitalizing letters would be more work. Generally though, the aesthetic has become more associated with girls, I suspect because women popular among girls were early to (or pioneered) the trend - namely Emma Chamberlain's all-lowercase YouTube video titles and Rupi Kaur's poetry books being entirely lowercase.


E.E. Cummings has all these Gen-Z hipsters beat by about a century, give or take.



Yeah, but not in this context. Now it is being used with the anti-AI movement


It is a weird trend.


The phone only auto-capitalizes the first letter of a sentence. If you end up changing it, sometimes it doesn't get replaced with a capital. If you get frustrated with autocorrect features, modern phones often don't allow you to select which features to turn off and you're forced to turn everything off; at least that's how it was during the time I learned to type on the phone. I'm not Gen Z, but I'd say it has a lot to do with it being much more cumbersome to capitalize a letter on the phone than on a keyboard.

Plus, there's also the aesthetic, for a while we did it out of angst and to show we weren't all that wrapped up in the modern sensibilities. It seems like the situation as society moved forward may have only gotten worse for this.

If you're confused about something gen z does, it's either:

1) they're teenagers and grew up with phones, get with the times old man

2) a large corporation that didn't have a monopoly on communication (maybe basically everything else too) when you grew up now does, and it's an artifact of being forced to grow up in that world

3) global warming is scary as shit, no one in power seems to give a shit, economies are crumbling, we're temporarily losing the worldwide battle for continued democracy, why won't they stop the war, school shootings, nuclear destruction isn't too close but doesn't look like it's getting farther away, etc etc etc. Basically, "have you seen the world? Why are you even trying, let alone the teenagers today?" type thing.


You can disable the option on phones to auto-capitalize.

Again, my observation here is not that "TVs should stay grayscale" but that this lowercase movement is now being used to "fight the AI generated content" which I find pretty funny because it makes no sense. You can tell ChatGPT to write in lowercase.

It especially got traction after @sama's tweet, which had no intention of associating it with non-GPT content, but just weirdly flexed that he types in lowercase.


Pervasive and baffling. I can't decide if lack of attention to name conflicts in marketing is the norm or if it's intentional.


Said chat platform explicitly calls itself "Microsoft Teams". You know that it is not being referred to because there is no mention of its name.


OK, big noob question: I never understood how to capitalize headlines correctly. Can someone ELI5? :D


> I never understood how to capitalize headlines correctly. Can someone eli5 ?:D

On HN, copy/paste is usually a safe bet (with a few exceptions): https://news.ycombinator.com/newsguidelines.html

If you mean in general, Wikipedia sums it up pretty well: https://en.wikipedia.org/wiki/Title_case


> On HN, copy/paste is usually a safe bet

Also note that HN automatically applies the capitalization rules to submitted headlines. So you don’t really need to do any changes to the capitalization yourself when you submit.

And for understanding how the rules work, I agree that Wikipedia article is good.


Like a big chunk of the language, English capitalisation is nonsensical and descriptivist. Different organisations have different rules just like different audiences have different expectations, and there is no single way to capitalise words in titles.

I'm with the parent comment on this one: for this particular headline it makes sense to lowercase "teams", since the HN audience will tend to correlate the uppercase version with the incorrect meaning of the headline.


Generally, capitalize longer words. But this really depends on the style book you choose to follow (MLA, APA, CMOS). See: https://en.wikipedia.org/wiki/Title_case


In English, the most common convention is to capitalize every word, or every word except small ones like articles and prepositions (the, of, for).


In US English. Not in the UK.


That's region-specific. Certainly not a feature of the language.


It's a feature of the language dialect.


See also this nice title-case converter web app: https://titlecaseconverter.com/


At the end of the day I wonder what OpenAI's endgame is here. They're starting to expand their business in a way that geometrically grows the size of the team, overlapping with products that Microsoft is offering, and making the whole non-profit/capped-profit thing a head-scratcher.

I guess you can argue this is just a marginal add-on to their existing ChatGPT product but I can imagine seeing them go full Salesforce/Oracle/enterprise behemoth here.

I would say I'm very pro AI development and pro reinstating Sam, but I've been starting to shake my head a bit. Their mission and their ambition are wildly different.


It’s pretty obvious that once they realized how much money was on the table, the “non-profit” aspirations and goals went out the window. The Altman saga from a few months ago painted this clearly.


How much money is on the table?


GDP bigger than that of the web.


I would assume the endgame is like Microsoft: to become an OS for your org. Knowledge management, human augmentation (code, emails, copilot all the things), data analytics, workflow automations, etc.

The mission changed when research ran into product market fit.


I kinda see it differently. There are these incredible use cases for what they can do with this technology, but they still require massive R&D and politics. They're taking the path of least resistance with these features. They should spin off R&D and let another division handle this low-hanging-fruit garbage. But maybe this is just how a business cycle works: you get a bite, you milk it for what it's worth, and you let the next-generation organization take it to the next level.


This is a way to create a moat. Do you think Zoom can survive open source with just bland features serving moms and pops? That's how you get them to stay out of open source.


> At the end of the day I wonder what openai's endgame is here

Sell AI products to fund making AI


At some point I think there is a conflict, though: the more powerful the AI models, the more risk there is to their own business.


Not if they move fast and build that moat to maintain their castle as the only provider of the highest-quality model services.


The Engineering example is absolutely hilarious. Sure, I'm going to copy-paste my code into an AI assistant to ask it about a bug that a linter would spot in real time as I write the code.


I agree with you completely, but the target audience of people who will do such a thing has no clue what a linter, lexer, or parser is. Maybe not even a compiler. And that audience is much larger than us folks at the ripe old age of 25+ even realize.


meh


meh, openai is dead. just use https://www.forefront.ai/ or chatbotui.com with mixtral, invite team, done


Mixtral is nowhere near gpt4. Forefront uses OpenAI or Anthropic (whose models aren't fantastic either).



