
This is a special article. The genius of the invitation to an unplanned lunch highlights the beauty of cooking, and of cooking together.

The description of the author cooking with Judith is both intimate and distant. Personal and communal. And, though largely silent, it speaks to a deep friendship built on years of experience and taste and creativity.

My favorite aspect of the article is the lack of recipes. Meals can be whimsical and unexpected and memorable when the ingredients are there but the plan is not.

I’m sure I’ll be thinking of this article for years to come.


[flagged]


It may be. But it is first and foremost a touching personal story about meeting an author’s hero and becoming friends with them. And in today’s ad-ridden Internet this is one type of promotion I hope to see more often. Not algorithmic garbage thrown in your face by a big corporation after sucking up every last bit of your personal data, but a story written by the people for the people. Is it wrong to mention a book about the person described at the end of the story?


[flagged]


I am not involved in this, or any other promotion. Moreover, I don’t see how my tendency to lurk and not comment is relevant here, but if you really want to know, the description of the woman in the article reminded me of someone I personally knew, which may be the reason it touched me enough to reply today.

I just feel that there's no reason to be combative about the featured article even if you abhor any and all types of promotion, given that there are so many far worse offenders around.


This kind of pot-stirring is against HN rules; it is quite unpleasant to be on the receiving end of such accusations in public. Email your suspicions to the mods, and let them nuke the accounts that are abusive; but let's not sour the flavor of HN for everyone else reading this. The "is that guy secretly a shill" game gets stale *fast*.


All I did was ask if they were involved. You accusing me of secretly pot-stirring a secret pot is against HN rules.


> But it is first and foremost a touching personal story

Does any recipe, cookbook, or general article about food on the internet NOT fall into this category??


Serious Eats has articles before the recipe that are usually full of technical information from the development of the recipe.

Sometimes there's a bit of the "touching personal story" but I'm a lot more used to seeing failures and tests in the before-recipe section there. As a random example, check out this page on poached chicken:

https://www.seriouseats.com/how-to-poach-chicken-recipe-8641...


Most of the cookbooks I’ve read were relatively straightforward, but those were mostly older books not written in English. That may be just me not reading a lot of recipes in general.

On topic — I would say that this article not being a recipe is important in that case. The story is not something detracting from the main point, it is the point.

Also when I was saying that I’d like to see more of this type of promotional content, I meant that just mentioning you are writing the book on the topic at the end of the article (without even linking to it) is vastly superior to pop-up videos tracking you across websites. I did not mean that the Internet somehow needs even more advertising in it.


It's a very common psychological trap to fall into, so all recipe sites have turned into "fake touching personal story" content mills over the past decade or so, yes.


I don’t think it’s a “trap”.

Recipes are not copyrightable (in the US; I'm not sure about elsewhere).

But a story with a recipe is. Creators are trying to protect their income first and foremost.


LLM-generated stories aren't copyrightable either, and I doubt ad-financed clickbait farms would care about suing each other anyway; this is just SEO and audience manipulation.


Use a tripod, and use a remote or set a 2-second delay on your shutter press if you start using longer exposures at low ISO (anything longer than 1/10 of a second, IMO).

Settings really depend on the effect you are trying to capture:

- If you want to try to capture it like your eye sees it: high ISO (as high as you can stand) and aim for between 1/500 and 1/30.

- If you want to capture a more painterly look with the whole sky colored in: low ISO (I'd start at 200) and a long exposure >3 seconds. Probably between 3 and 30 seconds depending on the available light.

Those should get you started to experiment and find what you like. Enjoy!

Oh yeah: turn off the focus light, turn off the screen, and turn off the red blinking indicators. Turn all the lights off so you can preserve your night vision.


Thanks, I got some good pictures! AF is def. a problem ...


I've been debating whether to respond here, and well, you can see my decision has been made. The caveat is that this response is also based on my personal experience, so your mileage may vary.

But for anyone reading this who is adjacent to a close friend or relative, or even a stranger, who is experiencing traumatic loss: the grieving process is a messy thing. No one experiences it the same way. Second-hand grief is similar.

So rather than "try to fix it" by saying anything, say nothing, and just be present. Just sit. That says more than words. And if you can't be there, notes of "You are on my mind" are good too.

There is no fixing grief, only going through it.


This reminds me of “The Wise Mind” from DBT sessions.

The idea is to find the balance between emotion and reason, which is where wisdom lies.

https://www.therapistaid.com/therapy-worksheet/wise-mind


Interesting read.


Interesting concept that raised a question for me: what is the primary limiting factor right now that prevents LLMs, or any other AI model, from going "end to end" on programming a full software solution or a full design/engineering solution?

Is it token limitations, or accuracy degrading the further you get into the solution?


LLM's can't gut a fish in the cube when they get to their limits.

On a more serious note: I think the high-level structuring of the architecture, and then the breakdown into tactical solutions — weaving the whole program together — is a fundamental limitation. It's akin to theorem-proving, which is just hard. Maybe it's just a scale issue; I'm bullish on AGI, so that's my preferred opinion.


Actually I think this is a good point: fundamentally an AI is forced to "color inside the lines". It won't tell you your business plan is stupid and walk away, which is a strong signal that is hard to ignore. So will this lead people with more money than sense to do even more extravagantly stupid things than we've seen in the past, or is it basically just "Accenture-in-a-box"?


AI will absolutely rate your business plan if you ask it to.

Try this prompt: "Please rate this business plan on a scale of 1-100 and provide bullet points on how it can be improved without rewriting any of it: <business plan>"


I agree that AI is totally capable of rating a business plan. However, I think that the act of submitting a business plan to be rated requires some degree of humility on the part of the user, and I do doubt that an AI will “push back” when it comes to an obviously bad business plan unless specifically instructed to do so.


I wouldn't trust an absolute answer, but it can help you generate counterarguments that you might miss.


> LLM's can't gut a fish in the cube when they get to their limits.

Is this an idiom? Or did one of us just reach the limits of our context? :P


Office Space reference.


I guess this would be the context window size in the case of LLMs.

Edit: On second thought, maybe at a certain minimum context window size it is possible to phrase the instructions in such a way that, at any point in the process, the LLM works at a suitable level of abstraction, more like humans do.


Maybe the issue is that for us the "context window" we feed ourselves is actually a compressed and abstracted version: we do not re-feed ourselves the whole conversation, but a "notion" and the key points we have stored. LLMs have static memory, so I guess there is no other way than to single-pass the whole thing.

For human-like learning it would need to update its state (learn) on the fly as it does inference.
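To make the first point concrete, here's a toy sketch of a compressed-context loop; llm(prompt) is a made-up stand-in for whatever completion API you use, not a real library call:

    # Carry a running summary plus the last few turns instead of
    # replaying the whole conversation on every request.
    def llm(prompt: str) -> str:
        raise NotImplementedError("plug a real model call in here")

    def chat_turn(summary: str, recent: list[str], user_msg: str):
        reply = llm(
            "Summary of the conversation so far:\n" + summary
            + "\n\nRecent messages:\n" + "\n".join(recent)
            + "\nUser: " + user_msg + "\nAssistant:"
        )
        recent = (recent + ["User: " + user_msg, "Assistant: " + reply])[-6:]
        # Fold older turns into the summary so the prompt stays bounded.
        summary = llm("Merge into one short summary:\n" + summary
                      + "\n" + "\n".join(recent))
        return summary, recent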


Half-baked idea: what if you have a tree of nodes? Each node stores a description of (a part of) a system and an LLM-generated list of what its parts are, each a small step towards concreteness. The process loops through each part in each node recursively, making a new node per part, until the LLM writes actual compilable code.
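Roughly, as a sketch (llm(prompt) is again a hypothetical stand-in; the prompts and depth limit are made up):

    # Recursively decompose a description until the model says a part
    # is concrete enough to be written directly as code.
    def llm(prompt: str) -> str:
        raise NotImplementedError("plug a real model call in here")

    def build_node(description: str, depth: int = 0, max_depth: int = 5) -> dict:
        verdict = llm("Answer yes or no: can this be written directly as code?\n"
                      + description)
        if depth >= max_depth or verdict.strip().lower().startswith("yes"):
            return {"desc": description,
                    "code": llm("Write the code for:\n" + description)}
        parts = llm("List the sub-parts, one per line, slightly more concrete:\n"
                    + description)
        return {"desc": description,
                "children": [build_node(p, depth + 1, max_depth)
                             for p in parts.splitlines() if p.strip()]}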


Isn't that what LangChain is?


See https://github.com/mit-han-lab/streaming-llm and others. There's good reason to believe that attention networks learn how to update their own weights (I forget the paper) based on their input. The attention mechanism can act like a delta that updates weights as the data propagates through the layers. The issue is getting the token embeddings to be more than just the 50k or so that we use for the English language so you can explore the full space, which is what the attention sink mechanism is trying to do.
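If it helps, the cache eviction policy in that line of work boils down to something like this toy version (the defaults here are illustrative, not the paper's exact values):

    # Keep the first few tokens' KV entries ("attention sinks") forever,
    # plus a sliding window of the most recent tokens, so the cache
    # stays a fixed size no matter how long the stream runs.
    def evict(kv_cache: list, n_sink: int = 4, window: int = 1020) -> list:
        if len(kv_cache) <= n_sink + window:
            return kv_cache
        return kv_cache[:n_sink] + kv_cache[-window:]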


Memory and fine-tuning. If it were easy to insert a framework/documentation into GPT-4 (the only model capable of complex software development so far, in my experience), it would be easy to create big, complex software. The problem is that currently the memory/context management needs to be done entirely on the side of the LLM interaction (RAG). If it were easy to offload part of this context management on each interaction to a global state/memory, it would be trivial to create quality software with tens of thousands of LoC.
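For reference, that caller-side context management is roughly the following; embed and llm are made-up stand-ins for an embedding call and a completion call:

    # Minimal RAG sketch: the caller, not the model, manages memory by
    # retrieving the most relevant chunks and stuffing them into the prompt.
    import numpy as np

    def retrieve(query, docs, embed, k=3):
        q = embed(query)
        scores = [float(np.dot(q, embed(d))) for d in docs]
        return [d for _, d in sorted(zip(scores, docs), reverse=True)[:k]]

    def ask(query, docs, llm, embed):
        context = "\n".join(retrieve(query, docs, embed))
        return llm("Context:\n" + context + "\n\nQuestion: " + query)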


It is the fact that LLMs can't and don't try to write valid programs. They try to write something which reads like a reply to your question, using their corpus of articles, exchanges, etc. That's not remotely the same thing, and it's not at all about "accuracy" or "tokens".


The issue with transformers is context length. Compute-wise, we can handle a long context window (in terms of computing the attention matrix and doing the calculations). The issue is training: the weights are specialized to deal with contexts only of a certain size. As far as I know, there's no surefire solution that can overcome this. But theoretically, if you were okay with the quadratic explosion (and had a good dataset, another point...), you could spend the money and train it for much longer context lengths. I think for a full project you'd need millions of tokens.
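The quadratic part in concrete numbers (a rough illustration; real implementations batch and approximate this differently):

    # Attention scores every pair of tokens, so the score matrix grows
    # with the square of the context length.
    for n in (4_096, 32_768, 1_000_000):
        print(f"{n:>9,} tokens -> {n * n:>22,} scores per head per layer")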


I see someone riding one of these around my neighborhood in what is essentially full motorcycle gear:

- full helmet with closed visor

- armored jacket with elbow pads

- knee pads

- all high-vis

Still doesn’t look like it’s enough while they cross a street in front of turning cars.


My attempt to understand this: if the cost of manufacturing something is 7x in California compared to the rest of the USA, then it's easier to just not sell it in California.

But having not read anything besides HN comments yet, I don't expect this to be the reality of the bill, only the reaction to the headlines.


Not sure if it's still the case but there was a time when I would browse aftermarket performance car parts and a number of them could not be sold in California.

I suspect there will be some products that won't be available in California in the future. But there will be many companies that adapt and stay in the California market.


"Editor" must be their AI writer for sports journalism.

From the bottom of the article:

---

Regardless that shedding Kelce even for one recreation is a devastating blow for the Chiefs offense, Mahomes has proven what he can do when one among his go-to guys goes down or is not with the crew.

After the Chiefs traded celebrity huge receiver Tyreek Hill to the Miami Dolphins earlier than final season, the consensus was Kansas Metropolis wouldn’t be the identical on offense, however that didn’t change into the case.

The Chiefs have confirmed they’ll get by and succeed with out a few of their star gamers, they usually might have to do this in Week 1, particularly if All-Professional defensive deal with Chris Jones continues his holdout.

Will probably be a tricky stretch proper out of the gate for the defending champions as they’ll have to determine tips on how to get the most effective of a hungry Lions crew whereas shorthanded.

---

Edit: Funny enough, it looks like any word that has synonyms gets replaced without context:

- Kansas Metropolis Chiefs / Kansas City Chiefs

- ...celebrity tight finish Travis Kelce / Tight End

- the whole soccer world tunes into as it'll start the 2023 NFL season. // football world


My biggest issue with Threads is that search only searches profiles.

What was Twitter’s “magic” to me was that search was for tweets.

So you could hear about a live event happening and, in near real-time, find information about it and people who were there or knowledgeable about the event.

My first exposure to this was the Atlanta Gas Shortages in 2008 and it was when the hashtag really came into its prime as a way to quickly find relevant content.

Threads has none of that. Try searching for the “Tour De France”. This morning when I did that I only got two profiles. This kills discoverability and the ability to follow the zeitgeist.

If Threads changes search to index content, then it could work. But until then it's just bland, with its attempt at using an algorithm to spoon-feed you content without giving you the ability to control what you want to see.

Shameless link to the thread where I first put these thoughts down: https://www.threads.net/t/CuW77iEuquK/?igshid=MzRlODBiNWFlZA...


It's a bit old but still running: Touch Arcade^ has a good mix of deep dives and "out this week" coverage, with a pretty active forum for new mobile (iOS and Android) and Switch games.

I still find myself at least considering the weekly features or looking at what they curate for “out this week”.

^ https://toucharcade.com/

