dlevine's comments

I’m job searching right now, and it’s definitely not the doom and gloom that I’m hearing about.

I think it’s a challenging environment for developers who are either inexperienced or who have skills that are out of date. I have found that companies are a bit more picky than I remember about knowing the exact tech stack they use, but they are still making offers, and those offers are pretty good.

Note that I’m applying mostly to mid-sized non-public companies. I’m not sure what it’s like applying to MAANG-types right now.


I'm in loops at multiple FAANGs right now and it's not even that bad. Feels similar to when I did this last time? But I do get the impression that they're all just throwing out any resumes that don't already have a FAANG on them. If you're already in the club then it's ok, but I think it's harder than ever to join the club so to speak.

This is EXACTLY where everyone should be applying… too many people are nuts about working at FAANG; I could live 50 lifetimes and never understand it.

When FAANG stocks were soaring and those companies handed out options like candy, even low-level devs there were making the kind of money usually reserved for medical doctors and attorneys. There was a huge gulf between SV pay and pretty much everywhere else.

Now, those companies have mostly lost the luster they used to have, and the comp differences aren't as large.


The last time any FAANG gave options to low-level employees was probably 2009 or earlier.

I think of LLMs like smart but unreliable humans. You don't want to use them for anything that you need to have right. I would never have one write anything that I don't subsequently go over with a fine-toothed comb.

With that said, I find that they are very helpful for a lot of tasks, and improve my productivity in many ways. The types of things that I do are coding and a small amount of writing that is often opinion-based. I will admit that I am somewhat of a hacker, and more broad than deep. I find that LLMs tend to be good at extending my depth a little bit.

From what I can tell, Sabine Hossenfelder is an expert in physics, and I would guess that she already is pretty deep in the areas that she works in. LLMs are probably somewhat less useful at this type of deep, fact-based work, particularly because of the issue where LLMs don't have access to paywalled journal articles. They are also less likely to find something that she doesn't know (unlike with my use cases, where they are very likely to find things that I don't know).

What I have been hearing recently is that it will take a long time before LLMs are better than humans at everything. However, they are already better than many, many humans at a lot of things.


I think there are a couple of truths to this.

1. Any low-hanging fruit that could easily be solved by an LLM probably would already have been solved by someone using standard methods.

2. Humans and LLMs both have to spend some amount of energy to solve problems. Now, there are efficiencies that can lower (or raise) that amount, but at the end of the day, TANSTAAFL. Humans spend it over a lifetime of learning and eating, and LLMs spend it in GPU time and power. Even when AI gets to human level, it's never going to abstract this cost away; energy still needs to be spent to learn.


The name SheepShaver is a play on ShapeShifter, which was a Mac II emulator for the Amiga. I remember running an Amiga emulator (UAE) AND ShapeShifter on top of that, since it was the best Macintosh emulation available at one point in the late 90s.


RIP to the other Mac ShapeShifter: https://web.archive.org/web/20060613064350/http://unsanity.c...

There was a whole community around theming OS X in the aughts, chock full of talented designers. It all depended on ShapeShifter. MacThemes2.net was bustling, along with self-important invite-only communities like macristocracy.com.

Good times.


Gosh, I remember that so well and fondly miss it. Was a wild, whimsical time for sure. They made other fun "haxies" (as they called them) too, but ShapeShifter was always the coolest. To this day I remember my favorite was the green Aluminum Alloy theme; I liked Milk too. Somatic was wild as well!

Found a list with screenshots for those curious: https://www.mac-dvd.com/cool-mac-theme.html

I rocked CrystalClear Interface (CCI) from Musings from Mars for a good while too.


Fun fact: ShapeShifter was virtualized, not emulated, so software ran at full speed. I used it to play Myst and other games on my Amiga. Its 68060 CPU was faster than any real Mac, so programs were screamingly fast on it.


SheepShaver was originally virtualised and not emulated. The original version ran MacOS in a virtual machine on PowerPC BeOS (on BeBoxes and Macs that ran BeOS). Because of this it ran at full speed and was akin to the Classic environment on early Mac OS X. I used to use it to run IE5, as the BeOS browser was quite poor and PowerPC never got ports of more modern browsers. It ran really well and could be made full screen.


I’ve got it on my Amiga 500 with Vampire 500 accelerator (so-called 68080 MMX). Pretty sure that’s nearly the fastest native 68k Mac, even if it’s an FPGA.


PiStorm is faster, but then you get into the nuances of arguing over whether pin-compatible emulation counts or not, and if not, why an FPGA emulation would.


For a very short period of time, under particular circumstances (PPC Macs were out but some Photoshop plugins were still m68k-only), an Amiga with SCSI and a 68060 was the fastest Photoshop Mac. :)


I read it as something completely different scanning HN


I have been playing around with MCP, and one of its current shortcomings is that it doesn’t support OAuth. This means that credentials need to be hardcoded somewhere. Right now, it appears that a lot of MCP servers are run locally, but there is no reason they couldn’t be run as a service in the future.

There is a draft specification for OAuth in MCP, and hopefully this is supported soon.
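
On the "hardcoded somewhere" point: for a locally run server today, the credential typically arrives as an environment variable that the server process reads at startup. A minimal sketch of that pattern, with a made-up variable name, endpoint, and helper purely for illustration (nothing here comes from the MCP spec):

    // Hypothetical tool logic inside a locally run MCP server.
    // The credential is injected as an env var set in the client's server config.
    const apiKey = process.env.MY_SERVICE_API_KEY;
    if (!apiKey) {
      throw new Error("MY_SERVICE_API_KEY is not set");
    }

    // The hardcoded credential is attached to every upstream request the tool makes.
    async function callMyService(query: string): Promise<unknown> {
      const url = "https://api.example.com/search?q=" + encodeURIComponent(query);
      const res = await fetch(url, { headers: { Authorization: `Bearer ${apiKey}` } });
      return res.json();
    }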


For the OAuth part, the access_token is all an MCP server needs. So users could complete an OAuth authorization, either in the settings or through the chatbot, and let the MCP server handle storage of the access_token.

For remote MCP servers, storing access_token is a very common practice. For MCP servers hosted locally, how to deal with a bunch of secret keys is a problem.
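
A rough sketch of what that per-user storage could look like on a remote MCP server, assuming the client has already completed the OAuth flow and handed over an access_token for each user. The names are illustrative, and a real server would persist and encrypt the tokens rather than keep them in memory:

    // Illustrative per-user token store for a remote MCP server.
    const tokensByUser = new Map<string, { accessToken: string; expiresAt: number }>();

    // Called once the client finishes the OAuth authorization flow.
    function saveToken(userId: string, accessToken: string, expiresInSec: number): void {
      tokensByUser.set(userId, { accessToken, expiresAt: Date.now() + expiresInSec * 1000 });
    }

    // Tool handlers look the token up and refuse to run if it is missing or expired.
    function getToken(userId: string): string {
      const entry = tokensByUser.get(userId);
      if (!entry || entry.expiresAt < Date.now()) {
        throw new Error("User must (re)authorize before this tool can be used");
      }
      return entry.accessToken;
    }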


There's an open source package that allows delaying providing credentials to an MCP server until runtime, via an MCP tool call: https://github.com/supercorp-ai/superargs

For hosted MCPs: https://supermachine.ai


You could use Nango for the OAuth flow and then pass the user’s token to the MCP server: https://nango.dev/auth

Free for OAuth with 400+ APIs & can be self-hosted

(I am one of the founders)


There are remotely run MCP server options out there, such as mcp.run and glama.ai



I have found feature-flagged rollouts to be one of the biggest advances in fairly recent software development. There's probably too much to say about it in a comment, but they massively de-risk launches in a number of important ways: being able to quickly turn a feature off if it has unintended consequences, and being able to turn it on for a very specific set of users.

With that said, I think that LaunchDarkly and the like are a bit expensive and heavyweight for many orgs, and leaving too many feature flags lying around can become serious debt. It totally makes sense to start with something lighter weight, e.g. an env var or a quick homegrown feature in ActiveAdmin.
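
As an example of the lighter-weight end of the spectrum, a homegrown flag can be as small as an env var check behind a tiny helper, which keeps call sites uniform and makes the eventual cleanup greppable (just a sketch, not any particular library):

    // Minimal env-var-backed feature flag check (no external service).
    // Example: FEATURE_NEW_CHECKOUT=on node server.js
    function featureEnabled(name: string): boolean {
      const value = process.env["FEATURE_" + name.toUpperCase()];
      return value === "on" || value === "true" || value === "1";
    }

    // Call site: turning the flag off is a config change and a restart, not a redeploy.
    if (featureEnabled("new_checkout")) {
      // render the new checkout flow
    } else {
      // fall back to the legacy flow
    }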


If one of these services were able to catalog commits with the FF, that'd be worth gold (wink wink, LaunchDarkly).


About a year later, I got the P3-550 that overclocked to 733. Not quite as good of an overclock in terms of percentages, but I ran that machine for 5 years with no issues.


I played Disco Elysium when it came out and enjoyed it. In particular, I thought the Inland Empire skill was pretty awesome. I can't imagine what the game would be like without the ability to talk to inanimate objects.


I took no Inland Empire but I did love Encyclopedia. Guess I'll have to give it another run....


I loved the way that you'd pass encyclopedia checks in the game and it would give you totally irrelevant information. I think at one point a character tells you she's using a contact mic and the skill check informs you about a boxer called "Contact Mike".


Or in other cases, completely relevant information, like the complete life history and works of Dolores Dei, which is simply unactionable as she has been dead for decades and the church is long-abandoned too.


Contact Mike is one of the most important encyclopedia checks in the game, honestly.

It's one of the few direct hints you get to HDB's backstory before the literal last scene, and it can actually give you actionable insight in a couple of scenes IIRC.


I miss my crazy necktie


The one who tells you to arrest people and take drugs? Some friend that is.


This is super impressive! It's a cool POC, although it is already clear that it would be feasible for Apple to put M.2 slots in MacBooks if they wanted to.

I wonder how much it would cost to have someone replace the BGA NAND chips in my MacBook. Apple charges $600-800 for a 2TB upgrade (depending on whether it's a 250 or 500GB drive originally). Someone would have to be able to do it for around $200-300 for it to be a feasible upgrade, especially considering that my warranty would be void. I assume there are people overseas who could do it cheaply, and that it would be fairly quick for someone who knows what they are doing.


There are already people doing M1 (and newer) flash upgrades by de-soldering the old flash chips and putting in new ones. I don't know the price, but this kit could potentially take out some of the steps and special hardware (USB flash adapters and a separate PC) required to do the job, by having the removable part already configured. While it won't reduce the difficulty of removing the old chips without damaging the motherboard, it could potentially be easier to solder in place than new flash chips (although I'd call that a big maybe). I'll be waiting until I see someone else using the kit before getting too excited, though.

What I'm immediately looking forward to is someone making aftermarket flash modules for the M4 mini, which uses a proprietary card format similar to (but not compatible with) the Mac Studio's (which does have aftermarket cards available now).


Consider that by the time you’ve run out of storage, you may have also run out of warranty.


M.2 slots? M.2 SSDs have drive controllers on them, but in the Apple Silicon world, those controllers are integrated in the SoC.

They should absolutely not reuse the slot type for incompatible products.

They should make one though, their storage offerings are lame.


AI adoption can be increasing even if progress of the base technology has slowed down. LLMs as a technology are very new, and there are doubtless tons of interesting uses we have yet to discover.

