blahgeek's comments | Hacker News

I think it's more common because someone doing only coding can get paid reasonably. On the contrary, few people who "bake cakes for fun, make music for fun, write poems, novels, play chess for fun, practice sports, grow potatoes" can get paid enough for a living, so that's usually not an option to consider. (Which is the reason I find us coding people very lucky.)

> only coding can get paid reasonably

If you happen to work for a company that's big enough to pay reasonably. And even that is a temporary accident of history.

There was a time with plenty of tailors (compared to today), living very reasonably, because there was demand, and the means.

Today, you're lucky if you manage to find one in your city, and even luckier if they're not too expensive (compared to ready-made stuff, that is).


Like you almost spelled out, tailors were never competing with ready-made. Clothing used to be expensive, until people (sometimes children) working for pennies could ship you, across long distances, something good enough to wear.

Come on, coding is universally at a premium compared to other trades. Naturally you wouldn't have a FAANG salary at an outsourcing farm overseas, but it'll certainly provide you with a comfortable living by local standards.

> coding is universally at a premium compared to other trades

It has been and it still is at this time. Just saying that it won't last.

The existential threat, and the perpetual adaptation to technology, that musicians (classical as well as contemporary) have faced since the invention of sound recording and its developments is coming for software developers too.


> perpetual adaptation to technology musicians have met

Didn't it also come with an important reduction in the number of people who could make a living from it?


Precisely.

I'm not sure that is actually true about tailors. My understanding is that most clothing was homemade. I assume people didn't generally make their own shoes, but they made their own textiles and basic garments, and most people didn't have many garments.

Maybe there is a specific time period you are referring to where this was common but as I understand it, pre-industrially there were very few artisans selling products for money. Clothes were made largely by women and girls for their families.


Presumably he is referring to the industrialization period, when suits were the everyday fashion. Once we moved on to baggy jeans and sweatpants, where fit doesn't matter much, the tailor was no longer relevant.

Yes, I'm referring to what we could call the golden age of tailoring, around 1800-1970.

You could say it was brief relative to human history, indeed: a transition period between cottage/home textile manufacturing and sewing, and the high (and accelerating) automation of today, managed by fewer people and lots of low-paid workers.

And such is the trajectory for software development, a brief golden age, between the moment where computers barely existed, and the moment where automation/acceleration takes over.

It won't eliminate software development, but it won't require as many people as it does today. Some "local" artisan shops, highly skilled, and more expensive, may still exist.

But the capital currently fueling high tech salaries will inevitably seek new/other growth opportunities, as it has always done with other growth drivers.


Be that as it may, there definitely used to be more tailors.

I code at work so they can give me money so I can buy the stuff I need to carry on living. I have very generous employers who pay me a lot more money than I need to live on. The code I write at work is not very creative code - I contribute bugfixes and incremental improvements; I advocate for better accessibility of our products; I spend time code reviewing for colleagues. Standard work.

When the working day ends I switch from my work laptop to my personal laptop and start doing fun stuff: creative coding; curious-itch-scratch coding; etc. I'll also spend time doing the other fun stuff like writing poems, inventing languages, drawing maps, watching reaction videos - there's all that family and friend stuff too which can (often) be fun.

It's a choice: "live-to-work", or "work-to-live". I choose living. Recently my employers had a promotion round (the first in a few years) and I told my manager not to put my name forward for consideration. I'm comfortable at my current level and don't need the worries and stresses that come with increased work responsibilities - that would just get in the way of the fun stuff!


So you are saying work is meant to be miserable and coding is the exception. Is that a life worth living?

It's not that work is meant to be miserable, it's that if work wasn't in some ways miserable/frustrating/unrewarding/etc, more people would be doing it for free.

Or rather, you wouldn't need to pay people to do things they already enjoy doing. So, the things you need to pay people to do must contain some things that people don't want to do for free.


> What I mean by doing research is not just Google it but also asking questions, calling people and using other resources.

> It’s obvious that currently none of the SOTA models can do such tasks, agentic or not. And therefore they are NOT AGI to me.

I myself almost never do that (call people when googling is possible). Guess I'm not a general intelligence. :)


> An often overlooked feature of the Gemini models is that they can write and execute Python code directly via their API.

Could you elaborate? I thought function calling was a common feature among models from different providers.


The Gemini API runs the Python code for you as part of your single API call, without you having to handle the tool call request yourself.

This is so much cheaper than re-prompting each tool use.

I wish this were extended to things like: you could give the model an API endpoint that it can call to execute JS code, with the only requirement being that your API has to respond within 5 seconds (maybe less, actually).

I wonder if this is what OpenAI is planning to do in the upcoming API update to support tools in o3.


I imagine there wouldn't be much of a cost to the provider on the API call there, so much longer times may be possible. It's not like this would hold up the LLM in any way; execution would get suspended while the call is made, and the TPU/GPU would serve another request.

They need to keep the KV cache to avoid prompt reprocessing, so they would need to move it to RAM/NVMe during longer API calls in order to use the GPU for another request.

This common feature requires the user of the API to implement the tool; in this case, the user is responsible for running the code the API outputs. The post you replied to says that Gemini will run the code for the user behind the API call.

That was how I read it as well, as if it had a built-in lambda type service in the cloud.

If we're just talking about some API support to call Python scripts, that's pretty basic to wire up with any model that supports tool use.
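To make the distinction in this subthread concrete, here's a toy sketch in pure Python. Nothing below talks to a real API; `model`, `run_code`, and `fake_model` are made-up stand-ins. It only illustrates why the usual client-side function-calling loop costs one API round trip per tool use, while provider-side code execution covers the whole loop in a single call.

```python
# Toy model of two tool-use flows; all names are illustrative, not a real SDK.

def client_side(model, run_code, prompt):
    """Ordinary function calling: the client executes each tool call,
    then re-prompts the model with the result (one API call per turn)."""
    api_calls = 0
    transcript = [prompt]
    while True:
        api_calls += 1
        reply = model(transcript)                    # network round trip
        if reply["type"] != "tool_call":
            return reply["text"], api_calls
        transcript.append(run_code(reply["code"]))   # we run the code...
        # ...and loop back to re-prompt with its output

def server_side(model_with_exec, prompt):
    """Gemini-style built-in code execution: the provider runs the code
    inside the request, so our side sees a single API call."""
    return model_with_exec(prompt), 1

# A fake model that asks for one code execution, then answers:
def fake_model(transcript):
    if len(transcript) == 1:
        return {"type": "tool_call", "code": "6 * 7"}
    return {"type": "text", "text": f"The answer is {transcript[-1]}"}

text, calls = client_side(fake_model, lambda code: str(eval(code)), "what is 6*7?")
# client_side needed 2 API calls here; server_side would need only 1
```

The real APIs of course differ in the details; the point is only that the re-prompting loop multiplies round trips (and prompt reprocessing) that built-in execution avoids.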


The title seems to suggest it's a bug in the Linux kernel, but it's not. The layout of an internal struct is never part of the user-facing API of the kernel, so it's bound to change. eBPF programs are expected to be compiled against the exact kernel config they will run on, just like any kernel module.


Author here: yep, I agree, it's not a bug, just what you get if you start accessing raw stuff instead of stable interfaces and the underlying things change (but in this case it was necessary). I mentioned this in the summary at the end.

My understanding is that with BTF + CO-RE, you get the flexibility of building your program binary once, and it will work on other (BTF + CO-RE capable) kernel versions without needing a recompile. But since I had to use lower-level methods for stack & pt_regs access, I had to manually add logic for checking kernel structural differences at runtime.

That being said, I have not yet gotten to test if a compiled xcapture binary can just be copied between machines (of the same architecture) and whether it really works as advertised...
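For readers unfamiliar with CO-RE, the core trick is that struct field offsets are resolved at program load time from the running kernel's BTF type information, instead of being baked in at compile time. A toy illustration in plain Python (the layouts and offsets below are invented for the example, not real kernel layouts, and this is a conceptual sketch of what libbpf does, not its actual code):

```python
# Invented BTF-like layouts of `struct task_struct` on two kernel versions;
# real offsets differ and are irrelevant here.
BTF_BY_KERNEL = {
    "5.15": {"task_struct": {"pid": 1256, "comm": 2952}},
    "6.8":  {"task_struct": {"pid": 1304, "comm": 3000}},  # fields moved
}

def relocate(struct, field, running_kernel):
    """Resolve a (struct, field) reference to the offset valid on the
    kernel the program is being loaded onto -- conceptually what the
    loader does with CO-RE relocation records at load time."""
    return BTF_BY_KERNEL[running_kernel][struct][field]

# The same compiled program can work on both kernels, because the offset
# is patched at load time rather than hard-coded at compile time:
assert relocate("task_struct", "pid", "5.15") == 1256
assert relocate("task_struct", "pid", "6.8") == 1304
```

This also shows the limitation raised elsewhere in the thread: relocation can follow a field that moved, but not a field that was removed or whose meaning changed.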


How would you recommend learning about eBPF / BTF / CO-RE? I have a basic understanding of what they are, but am not sure where to start for writing eBPF tracing programs, setting them up in production w/ Grafana, etc.


For getting started with an eBPF performance mindset, I normally recommend Brendan Gregg's book, just to see what's possible:

- https://www.brendangregg.com/bpf-performance-tools-book.html

And as a related activity, you could just install the bcc-tools package (on RHEL clones) and check out the /usr/share/bcc/tools directory to see what's already implemented. On the latest Ubuntu, these tools seem to be installed in /usr/sbin, etc., but you can run "find /usr -name '*bpfcc'" to get a list of the eBPF tools already installed there (and test/man some of the more interesting ones).

For the bigger picture and other eBPF uses like networking, I'd get Liz Rice's eBPF book (free download):

- https://isovalent.com/books/learning-ebpf/

But the most valuable resource for me when I took the leap from writing bpftrace one-liners to more sophisticated modern eBPF programs was (and still is) Andrii Nakryiko's blog with examples of modern BPF programming:

- https://nakryiko.com/


> but you could "find /usr -name *bpfcc"

Or ask your package manager. Debian (Ubuntu): `dpkg -L bpfcc-tools`, RedHat: `rpm -ql bcc-tools`


Even CO-RE won't help long term. Sure, it'll adjust structure offsets and such for you, but it can't deal with conceptual changes. We need an explicit contractual stable API, at the BPF helper level, not a quivering mound of hacks trying to make unstable APIs magically stable without the work needed to actually stabilize them.


> Isn't there basically Google/Waymo and then, seemingly much further behind, Tesla Cybertaxi, Amazon/Zoox, and Uber/Yandex?

Globally, there are also a few Chinese companies in the robotaxi market. Pony.ai and WeRide have both recently gone public.


And I think some in Germany as well.

Not yet with customers, but in the testing phase.


Reading halfway through the post, I thought what the author was going to do was analyze the distribution of numbers in the filenames and, I don't know, maybe estimate how often people take photos or videos, by time, country, etc. That would be an interesting study.


Ironically, I'm glad this post wasn't about that. That would've been too typical of HN, IMO. Why talk about metadata when you can just enjoy the data itself?


The logic is sound. Storage can be end-to-end encrypted, but it cannot be used for computation.


I'm not impressed. I can totally imagine a single person building this app in under 5 months. Of course, it would not have the best UI or recommendation algorithm or marketing stuff, but I don't think those are on the critical path for building the app. I expected that a company like Meta could build an app like this in weeks.

It got me thinking that, in terms of building software from 0 to 1, maybe the scalability of head count is surprisingly low.


> I expected that a company like Meta can build apps like this in weeks.

Why not a day?


That “python float” example is a hall of shame for Google, not Python. I searched the same term on kagi.com and the first results are exactly what the author expects.

Link: https://kagi.com/search?q=python+float&r=us&sh=cDUw0QAZf3DUs...


Even searching 'float' on docs.python.org brings decent results.

At first the author complains about bad search implementations on documentation sites, and then judges Python by the bad Google results.


Python stdlib docs are totally fine, too.


> Yet, if you want to support vscode you must create a specific extension for it and can't just have a language client.

It's funny that you generally don't need to do that for editors like vim or emacs - you only need to add a single line to your config to specify the command line that launches the server.


It depends - the client modules for Emacs' `lsp-mode` generally need a fair bit of configuration. Not just how to launch the server, but also often defining a native-compatible way of setting the options. e.g. for Emacs they often get wired up as `defcustom`s.


It honestly depends on how much your LSP server infests the editor. In the language server I wrote, there are really two cases where we needed an editor-specific extension:

* dynamic registration of file extensions
* display of SVGs generated by the language server

We also had to write ad-hoc custom Vimscript (and would need to do the same for emacs) for the first of those, and just dump a URL for vim to punt to a browser for the latter. So it isn't unrealistic to require custom editor scripts for editors besides vscode, in the sense that I've done so...

