This is an absurd way to think of this. Following the same train of thought for humans:
The business logic for humans is a single reproductive cell.
A single sperm weighs 2.3 x 10^-11 grams. If the average male weighs 75 kg, the bloat ratio for a human male is 3.2 x 10^15.
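For anyone who wants to check the arithmetic, a quick sketch using the numbers quoted above (nothing else assumed):

```python
# Rough sanity check of the "bloat ratio" above.
sperm_mass_g = 2.3e-11      # mass of a single sperm, in grams
body_mass_g = 75 * 1000     # 75 kg average male, converted to grams

ratio = body_mass_g / sperm_mass_g
print(f"{ratio:.2e}")       # ~3.26e+15, i.e. roughly the 3.2 x 10^15 quoted
```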
Getting back to the app: there is huge value in not needing to run the command yourself. Sure, it’s wrapped in a UI that comes with “bloat”, but honestly, who cares? When was the last time anyone needed to worry about hard drive space over a 40 MB file?
Well, the apps often come bundled with a bunch of other stuff. Automatic updates, background workers, telemetry …
All of which sucks up your compute resources and battery. Repeat for every such little utility app you have on your Mac. Some may implement that random stuff inefficiently (e.g. very frequent telemetry), which sucks even more. Some of it may even be wrong, vibe-coded, or copy-pasted.
Personally, it puts me off installing random utility apps, even if the single utility would be useful.
In the human analogy, the human has to be the entire computer too. It's all functional, not much bloat. For the app, the computer is external. It really is bloat.
I’ve noticed that many of the hard boards have a “choke point” in the solution where there are a small number of places you can place a specific queen but the information needed to place it can be difficult to gather.
A hacky trick is to place a queen and then ask for a hint. If deployed well, you can use this to short-circuit the challenge and often solve the puzzle in a fraction of the time.
There isn’t any penalty for taking a hint beyond being locked out of your next hint for a period of time.
Oh wow! Growing up, my chemical engineer uncle would come out on the Fourth of July and dump a bucket of stuff on the road in front of his house. A while later, when it had dried, he'd have us rollerblade and skateboard down the road to set off all the little explosions. It was a total blast. He refused to tell anyone what the compound was, but assured us it could be easily made. It has to be this stuff.
This stuff would go off immediately if you touched it.
I used to make it with my friends with household ammonia and medical iodine. We mixed them and then filtered through paper. Then this brown powder would explode after it dried if we touched it just lightly.
It's crazy that something so unstable can even exist at RTP. It's interesting how it threads the line between so-unstable-you-can't-even-make-it and stable-enough-to-make-but-decomposes-under-a-stern-glance. The universe has a sense of fun; NI3 is tuned specifically for existence as a prank substance.
I'm not a chemist, but the article talks about it being prepared in a solution of ammonium hydroxide, which can be poured on a floor and left to dry. Surely this is what the GP is comparing their memory to, and not a bucket of pure nitrogen triiodide.
Models are only as good as the data that is fed into them. OpenAI is paying Reddit 70M for access to the data. So the real value here is the conversations, not the model.
And in the case of Telegram, you will get very intimate data about people. You know in real time who they are talking to, what they are talking about, etc. It's extremely valuable data.
I think this is more the case. xAI is looking for more data to ingest.
I'm not sure how the integration will work with Telegram if the contents are supposed to be "secure". Are you just allowing your conversation to get exfiltrated to xAI? Does the other party you are talking to get a say in that?
I can't help but think this means that all my telegram conversations will now be fed into grok. I'm curious how I'd go about verifying that this is or isn't going to be the case.
I'm thinking about most of my communications on Telegram and it's kind of hilarious to imagine it suddenly starting to reply to people on Xitter with unsolicited horny furry roleplay instead of unsolicited white supremacist rants.
Not so hilarious that it doesn't make me want to consider trying to convince all my circles to move to a community-run Matrix server or something though.
Yeah, sounds like they finally accepted that tweets don't make an LLM smarter, only less biased, and so they're doubling down on feeding it more tweet-like data to make it smart and biased.
Did you have an expectation of privacy using Telegram? I think that's the real issue here. If you were paying for Telegram, then I would say yes, those conversations should not be handed over to xAI. But if you were ostensibly not paying for it, then I think it should be assumed that use of the service is considered consent for them to take that data and use it as they please.
So yeah, it's almost certain that everything you ever put into Telegram is in Grok.
SMS never claims to be E2E, nor is there an army of SMS defenders online talking about how some virtue of SMS is almost the same as E2E. While I don't like Telegram, I will happily admit it is better than SMS, but how is that an argument for anything?
I use both. I think Gemini produces longer, more complicated answers. ChatGPT is more succinct, but it could be because I've trained ChatGPT how to talk to me.
The context window difference is really nice. I post very large bodies of text into Gemini and it handles it well.
To what end? To establish a healthy society where people are free to explore and take risks.
It’s not that complicated. Remove the risk of failure by giving people money for food, a place to live, and an education, and by protecting their health.
There is a reason that many successful founders came from money. That money acted as a safety net that allowed them to bet big. Not everyone with money goes on to establish a big business, but the rate of success is far higher for those with money.
If you are explicitly calling authenticate() for each API, you’re doing it “wrong”. At that point you want implied authentication, not explicit authentication. Why not move it to some middleware that gets called on every API call?
No, that’s not how this works. You register the middleware with your web framework and it gets called as part of every web request, before the request hits your endpoints. This allows you to trust that authentication has been called for all API calls.
> Because then you are calling middleware_caching_auth_broker() from 37 places
No you aren't. You aren't really calling it from anywhere. The framework you're using, which you aren't writing, is calling the registered middleware.
The topic here is code-structure complexity from something being called in 37 different places. A registered middleware doesn't run into that issue, because it isn't called anywhere that "code structure complexity" matters.
Your reasoning is isomorphic to "I'm calling a bit shift millions of times because I have written some code in a programming language." Technically true but not what we're talking about here.
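To make that concrete, here's a minimal sketch of registered auth middleware, assuming Flask purely for illustration; the route and token check are made up:

```python
from flask import Flask, abort, request

app = Flask(__name__)

def is_valid(token):
    # Hypothetical check; a real app would verify a session or signed token.
    return token == "Bearer secret-token"

@app.before_request
def authenticate():
    # Registered once; Flask runs it before every request reaches an endpoint,
    # so no endpoint ever calls authenticate() itself.
    if not is_valid(request.headers.get("Authorization")):
        abort(401)

@app.route("/api/items")
def list_items():
    # By the time we get here, authentication has already happened.
    return {"items": []}
```

The endpoints never mention authenticate(); the framework guarantees it already ran.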
Honestly, cleaning up the README and documentation would go a very long way. Right now, all the information feels fragmented across lots of little pages. I clicked into the documentation, followed the first link presented to me on each page, and five clicks or so in I was on the command-line docs, but I hadn't seen anything that gave me a high-level overview of what git-bug is, what it does, why I'd want to use it, etc.
I understand that documentation can be hard and you need docs for newbies and long time users, but as a newbie I cannot for the life of me figure out what this is.
Think of it as an LLM that automagically pulls in context from your working directory and can directly make changes to files.
So rather than pasting code and a prompt into ChatGPT and then copy and pasting the results back into your editor, you tell Claude what you want and it does it for you.
It’s a convenient, powerful, and expensive wrapper
Not the OP, but in a similar boat. Curious to understand: how do you make sure the Claude Code CLI tool doesn't break existing functionality?
My hesitation to adopt stems from the times the Claude.ai web UI has broken my code; but since I can visibly verify it there, I iterate until it seems reasonable syntactically and logically, and then paste it back.
With autonomous changes to the code, I'm slightly nervous it could break too many parts at once -- hence my hesitation to use it. Any best practices would be appreciated.
Claude Code will happily and enthusiastically do horrible things to your code. But it always asks first. So you can tell it "NO!" and suggest a better approach.
Imagine having a college sophomore CS major who types really quickly and who is up to date on lots of new technologies. But they're prone to cutting corners when they get stuck, and they have never worked on anything larger than a group project. Now imagine watching them as they work (really quickly) and correcting them when they mess up.
This is... tolerable for small apps. If you have problems that could be solved by a team of very junior programmers, and if you're willing to provide close supervision, then it might even make sense for some real code. Or if you kind of know how to code, and you just need little 1,000 line throwaway tools (like a lot of other STEM fields), eh, it's probably OK.
But your mentoring effort will never result in the model actually learning anything, so it's more like you get a new very junior programmer for each PR.
I don't want to completely badmouth this. For very early stage startups where you need to throw 50 things at the wall, most of them glorified CRUD apps, and see what sticks with the customers, then a senior engineer could make it work. But if you have a half dozen people who only sort of know how to write code all "mentoring" Claude, then your code base will become complete trash within two weeks. In practice, I see significant degradation above 1,000 lines for "hands off" operation, and around 5,000 lines if I'm watching it intensely and carefully reading all code.