Hacker News | loueed's comments

Apple's chips are based on the ARM architecture, which is inherently more power-efficient than the x86 architecture used by AMD and Intel for their laptops. Apple also designs these chips in-house.

Apple is using a combination of high-performance and high-efficiency cores (Big.LITTLE architecture), allowing the chip to balance performance and power usage based on the task at hand. They also include dedicated hardware for specific tasks (like video encoding/decoding), which can be more efficient than performing these tasks on general-purpose cores.

AMD and Intel design chips for a wide range of devices and systems, requiring a more generic approach. This limits their ability to optimise for specific hardware and software combinations. Apple designs chips solely for its own products, allowing for tighter integration between chip, hardware, and software.


I understand that Apple controls its ecosystem much more tightly than either AMD or Intel does, but I can't imagine an M1/2/3 being less "generic" than an AMD/Intel chip. Care to elaborate?

Also, the question is what makes the architecture more power-hungry. Originally there was this distinction of CISC (Intel/AMD) vs RISC (ARM/Apple), but my understanding is that at least Intel/AMD now run something RISC-like under a CISC front end; externally it looks like CISC/x64, but internally instructions are decoded into RISC-like micro-ops.


Sony has a monopoly and limited production on MicroOLED, which is why the displays are the most expensive part of Vision Pro.

Apple is reported to be testing displays from BOE, Apple will be able to negotiate better prices once Sony loses their monopoly.

If Vision Pro 2 is planned for 2027, we might see Apple use Samsung eMagin OLED.


BOE, Samsung, and Japan Display are in market or ramping now.


Pure vision currently. The last we heard about LiDAR was the issue of combining LiDAR data with vision in the NN. Which sensor should take priority in the NN? For example, LiDAR might recognise the shape of a sign, but vision will know whether it's a stop sign or not.

I'd be super interested to see if someone has successfully combined RGB data with LiDAR data.


> combining Lidar Data with Vision in the NN

This is basic university robotics: sensor fusion! There is a whole host of techniques for dynamically updating confidence between two or more sensors estimating the same values, the Kalman filter being the standard approach (which, for reference, was used in the Apollo missions 50 years ago, and has been developed a lot since then).
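For the simplest case of the idea above (two sensors measuring the same static quantity), the Kalman update reduces to inverse-variance weighting. A minimal sketch (function name and numbers are my own, not from any specific robotics library):

```python
def fuse(mu1: float, var1: float, mu2: float, var2: float) -> tuple[float, float]:
    """Fuse two noisy estimates of the same quantity.

    Each sensor reports a mean and a variance; the fused estimate
    weights each mean by the inverse of its variance, so the less
    noisy sensor dominates. This is the static-state Kalman update.
    """
    fused_var = 1.0 / (1.0 / var1 + 1.0 / var2)
    fused_mu = fused_var * (mu1 / var1 + mu2 / var2)
    return fused_mu, fused_var

# Example: LiDAR says 10.0 m, camera depth says 12.0 m, equal variance.
distance, uncertainty = fuse(10.0, 4.0, 12.0, 4.0)
```

Note that the fused variance is always smaller than either input variance, which is why adding a second sensor helps even when it is noisier than the first.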

And yes, it is commonly applied with vision models. There is a whole host of combined RGB/LiDAR, structured-light, depth-camera, and other setups in the labs students are working in at my local university, and there have been for at least six years.


For reference, I, a computer science undergrad, learned this kind of sensor fusion theory in an elective class that was basically an excuse for a soon-to-retire professor to play with Lego robots.

You can know more about sensor fusion than Elon does by reading a literal blog post.

The NN shouldn't have to "choose" one or the other. It's a classic "Not even wrong" question!


What does NN stand for?


Neural network, I would imagine.


Neural network, I guess.


Neural network


mailerlite.com has a free tier for 12,000 emails per month but limited to 1,000 subscribers.

If you don't mind writing a bit of JavaScript, you could use Cloudflare Workers, which allow sending emails via the MailChannels API. Be sure to set up certificates so people can't spoof your domain.


I also have a bookmarklet for quick notes.

> data:text/html,<html contenteditable><body style="margin: 10vh auto; max-width: 720px; font-family: system-ui"><h1>New Note

I added some basic styles so I can screenshare :D

Also, in most browsers, Ctrl+B, I, and U work in contenteditable divs.


From what I've seen, they include a fallback set of gear shifter buttons below the centre console. These work even if the screen is black.

I do see a lot of praise for its ability to auto-shift; basically it should predict the direction of the vehicle based on the surrounding environment.


Interesting post. I've not used YAML outputs yet. When using GPT-3.5 for JSON, I found that requesting minified JSON reduces the token count by a significant amount. In the example you mention, the month object minified is 28 tokens vs 96 tokens formatted, which actually beats the 50 tokens returned from YAML.

It seems like the main issue is the whitespace and indentation that YAML requires, unlike JSON.


Yes, minified JSON would use even fewer tokens than YAML. But:

1. LLMs tend to have a very hard time producing minified (compacted) JSON in the output consistently.

2. As for compacted JSON input: empirically, LLMs seem to process it quite well for basic cognitive tasks (information retrieval, basic Q&A, etc.), but when it comes to slightly more sophisticated tasks they do worse compared to exactly the same input uncompressed. I've mentioned this and provided examples in the comments of the article.


What do you enjoy most about software dev? Open source is great for basically any area of software development; you could create some PRs to existing projects you find interesting. I'm a frontend dev but recently enjoyed learning about Deno and built a tiny blog using their Deno KV service. I'm now trying to build a super basic sandbox like CodePen using Cloudflare Workers and KV.

If you're not a fan of serverless, fly.io or render.com are great for hobbyist projects.

Also ChatGPT can be great to help work out a project idea.


I made an iOS Shortcut for this so I can ask Siri for the QR code when needed. There's a built-in "Generate QR Code" action that can take a text action containing the Wi-Fi string.

The only issue is hard-coding the password in the shortcut.
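The "Wi-Fi string" fed to the QR action presumably follows the common `WIFI:` payload format popularized by ZXing. A minimal sketch of building that string (the function name and escaping helper are my own; auth type defaults to WPA as an assumption):

```python
def wifi_qr_payload(ssid: str, password: str, auth: str = "WPA") -> str:
    """Build a 'WIFI:' QR payload like WIFI:T:WPA;S:MyNet;P:secret;;"""

    def esc(value: str) -> str:
        # Backslash-escape the characters that are special in this
        # format. Escape the backslash itself first to avoid doubling.
        for ch in ('\\', ';', ',', ':', '"'):
            value = value.replace(ch, '\\' + ch)
        return value

    return f"WIFI:T:{auth};S:{esc(ssid)};P:{esc(password)};;"
```

Scanning a QR code of this string prompts most phones to join the network, which is exactly what the Shortcut automates.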


Great work! Any idea why the import isn't resolving in a Vue app? `Cannot find module 'thememirror' or its corresponding type declarations`


Thanks! The `thememirror` package is ESM only, so maybe that could be it?

