Hacker News new | past | comments | ask | show | jobs | submit | tobyhinloopen's comments login

Stop hijacking scrolling. Why would you do that? What developer thought this was a good idea?

I think the main issue is that designers and web "developers" do not use their own crap.

The scrolling I didn't find too off putting, but that floating nav bar is beyond awful; I had to Inspect -> Delete Element to be able to read the article.

the LLM.

The corrections are just metadata, the RAW data is still there. This is true for both DNG and ARW (Sony). Don't know about the other brands. The corrections can even look different depending on what program you use to interpret them.

I don’t think that’s true in general. As a sibling comment points out, this is not true for some DNGs - for example, the output of an iPhone is a DNG, but with many, many transforms already baked in. A DNG might even be debayered already.

The GFX 100s II applies a transform to RAW data at ISO 80; see: https://blog.kasson.com/gfx-100-ii/the-reason-for-the-gfz-10...

I don’t know much about ARW, but I do know that they offer a lossy compressed format - so it’s not just straight off the sensor integer values in that case either.


Okay true, but that's not the format's fault (:

The GFX 100s II thing is very interesting. Totally not what I would expect from such a "high end" camera.


damn, that is a quirk that would've had me pulling my hair out if I worked with those.

At least it's only at ISO 80, where noise would be minimal anyway (: I rarely use noise reduction because I don't like the artificial cleanliness of the result.

It's all float value arrays with metadata in the end. Most camera sensors are pretty similar and follow common patterns.

DNGs have added benefits, like including compression (optional) (either lossy or lossless) and error correction bytes to prevent corruption (optional). Even if there's some concerns like unique features or performance, I'd still rather use DNGs without these features and with reduced performance.

I always archive my RAWs as lossy compressed DNGs with error correction and without thumbnails to save space and some added "safety".
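The "error correction" idea can be sketched generically. This is only an illustration of per-chunk checksumming (detect and localize corruption), not the actual DNG on-disk layout; the chunk size and digest choice here are arbitrary:

```python
import hashlib

CHUNK = 64 * 1024  # hypothetical chunk size; DNG's real layout differs

def chunk_digests(data: bytes, chunk=CHUNK):
    """One digest per chunk, so a later bit flip can be localized, not just detected."""
    return [hashlib.sha256(data[i:i + chunk]).hexdigest()
            for i in range(0, len(data), chunk)]

def find_corrupt_chunks(data: bytes, digests, chunk=CHUNK):
    """Compare current digests against the stored ones; return bad chunk indices."""
    return [n for n, d in enumerate(chunk_digests(data, chunk)) if d != digests[n]]

original = bytes(range(256)) * 1024          # 256 KiB of stand-in "raw" data
digests = chunk_digests(original)

damaged = bytearray(original)
damaged[70000] ^= 0xFF                       # flip one byte, lands in chunk 1
print(find_corrupt_chunks(bytes(damaged), digests))  # -> [1]
```

Note that checksums only detect damage; actual correction (repairing the flipped bits) would require redundant parity data, e.g. Reed-Solomon codes.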


Nitpicking correction: the sensor gives you a fixed number of bits per pixel, like 10 or 12. These are unsigned integers, not floats.

Typically you want to pack them to avoid storing ~30% zero bits, so the bytes often need unscrambling.

And sometimes there is a dark offset: in a really dark area of an image, random noise around zero can also go a little negative. You don't want to clip that off, and you don't want to use signed integers, so there typically is a small offset.
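A sketch of what that unpacking and dark offset look like in practice. The 2-samples-in-3-bytes layout and the black level value are assumptions for illustration; real packing schemes and offsets vary per camera and live in the maker metadata:

```python
import numpy as np

BLACK_LEVEL = 64  # hypothetical dark offset; the real value comes from metadata

def unpack_12bit(packed: np.ndarray) -> np.ndarray:
    """Unpack pairs of 12-bit samples stored in 3 bytes (one assumed layout)."""
    b0 = packed[0::3].astype(np.uint16)
    b1 = packed[1::3].astype(np.uint16)
    b2 = packed[2::3].astype(np.uint16)
    first = b0 | ((b1 & 0x0F) << 8)   # low byte + low nibble of middle byte
    second = (b1 >> 4) | (b2 << 4)    # high nibble of middle byte + high byte
    out = np.empty(first.size + second.size, dtype=np.uint16)
    out[0::2], out[1::2] = first, second
    return out

def to_signed_around_black(raw: np.ndarray) -> np.ndarray:
    # Subtract the dark offset in a signed type, so noise below black
    # stays negative instead of being clipped to zero
    return raw.astype(np.int32) - BLACK_LEVEL

packed = np.array([0xAB, 0xCD, 0xEF], dtype=np.uint8)  # two 12-bit samples in 3 bytes
print(unpack_12bit(packed))  # -> [3499 3836]
```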


Sometimes it just randomly complains about the content guidelines, and the next day it will do it perfectly right away. Maybe you just caught it at a bad moment, or maybe it depends on which server you're randomly assigned to.

No, it first generates the image and then a completely different component checks the image for adherence to the guidelines. So it's not your prompt that violates the guidelines but the resulting image (which is different every time for the same prompt).
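That two-stage pipeline can be sketched like this. Everything here is hypothetical stand-in logic, not the actual implementation; the point is only that the checker sees the sampled image, not the prompt, so the same prompt can pass one day and fail the next:

```python
def generate_image(prompt: str, seed: int) -> str:
    # Stand-in for a diffusion model: same prompt, different seed -> different image
    return f"image({prompt}, seed={seed})"

def violates_guidelines(image: str) -> bool:
    # Stand-in for the separate post-generation checker: it inspects the *image*.
    # Here we arbitrarily flag seed 0's output just to exercise the retry path.
    return "seed=0" in image

def generate_with_moderation(prompt: str, attempts: int = 3):
    for seed in range(attempts):
        img = generate_image(prompt, seed)
        if not violates_guidelines(img):
            return img  # first sampled image that passes the checker
    return None  # every sample was rejected -> the user sees a guidelines refusal

print(generate_with_moderation("cat"))  # -> image(cat, seed=1)
```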

Every time I try to increase the performance of my software by using multiple cores, I need a lot of cores to compensate for the loss of per-core efficiency. Like, it might run 2-3 times as fast on 8 cores.

I'm sure I've been doing it wrong. I just had better luck optimizing the performance per core rather than trying to spread the load over multiple cores.


Or your task needs the overhead to sync and read/write shared data. Only you can really tell, with access to the code/data, but 3x speed on 8 cores may well be the theoretical maximum for this specific thing.

Synchronization overhead is more than people think, and it can be difficult to tell when you're RAM/cache-bandwidth limited. But it makes a difference if you can make the "unit of work" large enough.
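That kind of scaling is consistent with Amdahl's law even before counting sync overhead. A quick check of what "3x on 8 cores" implies about the parallelizable fraction of the work:

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Ideal speedup when only part of the work parallelizes (no sync overhead)."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# Solve 3 = 1 / ((1 - p) + p/8) for p:
p = (1 - 1/3) / (1 - 1/8)
print(round(p, 3))                     # -> 0.762: only ~76% of the work parallelizes
print(round(amdahl_speedup(p, 8), 2))  # -> 3.0
```

So a mere quarter of serial work is enough to cap 8 cores at 3x, before any synchronization or memory-bandwidth effects make it worse.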

Cruelty is the point. Cruelty is what the American people want.

The American people are not victims of a dictatorship, they voted for this.


I have no clue what I'm supposed to do, am I stupid?


Only white moves. Eat black. Go to green dot.


welcome to life. stay calm & use only as much energy for work as necessary, use what's left for fun.

whenever given, read instructions carefully.

capture as many capturables as possible. get to the light at the end last.

PS: it's a knight and can only move like that hockey-stick-shaped Tetris block


Thank you for your valuable life advice! I figured it out


That's a completely reasonable boundary. Privacy and consent are critical, especially when sharing personal messages or conversations. It's fair to expect that your interactions remain private unless you've explicitly agreed otherwise. If you'd like, you can communicate your stance clearly to others in advance, ensuring they're aware of your boundaries regarding the use of your messages with AI tools or other external resources.


I understand why one would think it's funny to feed the parent comment into an LLM but please at least label when you echo such output on the site


I don't think their main concern was the privacy aspect.


What do you think their concern was? I can't see any other issues someone might have.


Energy usage is another. What would happen to world power consumption if 1% of WhatsApp chats were fed to ChatGPT?

A third reason besides privacy would be the purpose. Is the purpose generating automatic replies? Or automatic summaries because the recipient can't be bothered to read what I wrote? That would be a dick move and a good reason to object as well, in my opinion


> What would happen to world power consumption if 1% of WhatsApp chats were fed to ChatGPT?

The same thing that happens now, when 100% of power consumption goes to other purposes. What's the problem with that?


Huh? It's additional power draw in the midst of an energy transition. It's not currently being used differently. What do you mean what's the problem with that?

Also don't forget it's just one of three aspects I can think of off the top of my head. This isn't the only issue with LLMs...

Edit: while typing this reply, I remembered a fourth: I've seen many people object morally/ethically to the training method, in terms of taking other people's work for free and replicating it. I don't know where I stand on that one myself yet (it is awfully similar to a human learning and replicating creatively, but clearly on an inhuman scale, so idk) but that's yet another possible reason not to want this


If people need additional power, they pay for it. If they want to pay for extra power, why would we gatekeep whether their need is legitimate or not?


Because of the aforementioned shortage. Paying for more power means coal and gas get spun up, since there aren't enough renewables, and the externalities aren't being paid for by those people

I'm also happy to have them pay for the full cleanup cost rather than discourage useless consumption, but somehow people don't seem to think crazy energy prices are a great idea either

Also you're still very focused on this one specific issue rather than looking at the bigger picture. Not sure if the conversation is going anywhere like this


What's the bigger picture? You said "power usage", "to what purpose?" (you kind of don't get a say in whether I use an LLM to reply to you, though you're free to stop talking to me), and "objections to the training method", which doesn't really seem relevant to the use case, but more of a general objection to LLMs.


This is incredibly cool, thank you


If you're navigating on a 2D grid, Jump Point Search "Plus" can be extremely fast:

https://www.gameaipro.com/GameAIPro2/GameAIPro2_Chapter14_JP...
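For contrast, here is a minimal plain grid A* (the algorithm that JPS/JPS+ accelerates by pruning symmetric paths and jumping over straight runs). This is a baseline sketch, not JPS itself:

```python
import heapq

def astar(grid, start, goal):
    """4-connected grid A* with a Manhattan heuristic; 0 = free, 1 = wall.
    Returns the shortest path length in steps, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_heap = [(h(start), 0, start)]   # (f, g, position)
    best = {start: 0}
    while open_heap:
        f, g, pos = heapq.heappop(open_heap)
        if pos == goal:
            return g
        if g > best.get(pos, float("inf")):
            continue  # stale heap entry
        r, c = pos
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = ng
                    heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc)))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))  # -> 6 (around the wall)
```

JPS gets its speedup on uniform-cost grids by only expanding "jump points", and the JPS+ variant in the linked chapter precomputes those jumps per cell, so the open list stays tiny compared to this baseline.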

