
> Yes but your project won't be sustainable without funding

What project? Maybe the ML model OP has represents a decent amount of work, but OP clearly has no project, just a collection of tools they could conceivably use in one. People don't pay for code, they pay for solutions to problems.

If OP wants to keep the code to themselves then go ahead; it's your IP, nobody cares. If OP wants to build an actual project to monetize the code that way then go for it. But the industry long ago decided that random code snippets that haven't been integrated into a project just aren't worth the overhead of monetizing. Open source it or keep it to yourself.


You are pretty confused about why this is.

When the only market you ever had was high-touch, high-cost, low-volume production, then that is your default business model.

The biggest issue is that Trump is pushing tariffs without first ramping up local manufacturing; the type of manufacturing you are looking for isn't _currently_ being catered for in the US. It may be in the future depending on how things pan out. The bet Trump is making is that it can happen, and time will tell whether he is right.

I don't think it will generate jobs for local US manufacturing, since the only way to compete with low-cost-of-labour markets is to automate more than the low-cost-of-labour country does.

Business is reasonably good at filling whatever niche is willing to pay. So far the evidence is that Trump is willing to overcommit and then backtrack. Having a negative outlook doesn't help anyone; think positive about your country and shift with the times.


> think positive about your country and shift with the times

You know I tried to think positively about the United States; but darned if they don't keep doing negative things. Like appointing grossly incompetent people to head Federal departments. Like unlawfully and arbitrarily abducting people from the streets. Like extorting universities - ideally centres of free thought - over non-complying ideological positions. Like appearing to wreck the economy; but in ways that might just advantage himself and others in his circle. And the list goes on...

Some of us aren't "shifting with the times" because of an ethical line we won't cross. I grew up in the United States in the 1960s and had the constant drumbeat of "We're the world's melting pot," "We're the most benevolent spreader of democracy," "We're practically the only free country on the planet," and "We are a country of laws" beaten into us in public school. So it's a little jarring to see the wholesale abandonment of these values at the hands of someone who can barely string together a cogent sentence of more than, say, 4-5 non-repeating words and for whom "negotiating" means "win/lose", instead of "how can we meet our needs _and_ your needs, while creating more value in the process?"

Personally, I tried having a positive outlook; but saw this coming and left the U.S. just ahead of Trump 1.0.

This rant aside, it's incredibly wishful thinking to assume that one can undo in weeks or months, the complex web of international trade that has developed over decades because of the much-vaunted invisible hand of the market.


> think positive about your country

Like insisting the United States is 'rigged, crooked and evil'?

Trump insists the United States is 'rigged, crooked and evil':

https://www.msnbc.com/rachel-maddow-show/maddowblog/trump-in...

>“The Witch Hunt continues, and after 6 years and millions of pages of documents, they’ve got nothing. If I had what Hunter and Joe had, it would be the Electric Chair. Our Country is Rigged, Crooked, and Evil — We must bring it back, and FAST. Next stop, Communism!”

So do you have any shred of evidence he's backtracking on all the racism and misogyny and homophobia and transphobia and cruelty and corruption he overcommitted on?


How you implement algorithms and data structures in C++/Rust is semantics at best. The imperative shells of those languages are identical semantically, right down to the memory model.

Right, that's why a 20 year old book on algorithms and data structures is not necessarily outdated, but a 20 year old book on C/C++ most certainly is.

My copy of The C++ Programming Language for C++98 is still useful today, as is my copy of The C Programming Language for C89. The idea that these books are no longer useful is absurd. Modern compilers still support those versions, and the new versions are largely extensions of the old (although C++11 changed some standard library definitions). The only way you could think this is if you have zero knowledge of these languages.

> The only way you could think this is if you have zero knowledge of these languages.

Exactly. For context, see my original comment above about C/C++ books being paid resources.


Are you unable to use search engines?

https://www.learn-c.org/

There are so many free learning resources for these languages that it is ridiculous to say that you need books to learn them. The books are great, but non-essential. If you insist on reading books, there is an ancient invention called a library that you can use for free.


What C standard does that website describe?

At a glance, the code is compatible with all C standards ever published. You are too fixated on learning the latest. The latest is largely just a superset of the earlier standards and the new features are often obscure things you are unlikely to need or want to use.

The main exceptions that you actually would want to use are the ability to declare variables anywhere in a block and the single-line // comments from C++, both added in C99. Most C programmers do not even know about most of the things added in newer C standards beyond those and perhaps a handful of others.
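To illustrate, here is a minimal sketch (a toy program, not from any particular codebase) showing both of those C99 additions:

```
#include <stdio.h>

int main(void)
{
    int total = 0;              // C99: single-line // comments from C++

    for (int i = 0; i < 4; i++) // C99: declare the loop variable in the for statement
        total += i;

    int doubled = total * 2;    /* C99: declarations anywhere in a block,
                                   not only at the top as in C89 */
    printf("%d\n", doubled);
    return 0;
}
```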


I did more research and found https://isocpp.org/get-started which appears to be the authority for C++. It states that I will need a textbook for learning C++ and includes a link to the Amazon page for Bjarne Stroustrup's "A Tour of C++" (not buying it lol). For C the situation is more complicated because there appear to be multiple organizations making standards for it, and you have to pay "CHF 221.00" to even see the standard. It kind of reminds me of the web, where there are also multiple consortiums making standards that browsers hopefully implement (except the web standards are free). In conclusion I much prefer Rust, where you can just read the docs without bullshit.

Almost nobody using C (or C++ for that matter) has read the standard. The standard exists for compiler developers. If you want to read the standard, you can get copies of the drafts for free. It is an open secret that the final draft is basically identical to the standard that costs money. However, it is very rare that a C programmer needs to read the standard.

As for C++, there is nothing at that link that says you need a textbook to learn C++ (and the idea that you need one is ludicrous). The textbooks are suggested resources. There are plenty of free resources available online that are just as good for learning C++.

You would be better off learning C before learning C++. C++ is a huge language and its common history with C means that if you do not understand C, you are likely going to be lost with C++. If you insist on learning C++ first, here is the first search result from DuckDuckGo when I search for "learn C++":

https://www.learncpp.com/

You will likely find many more.

For what it is worth, when I was young and wanted to learn C++, I had someone else tell me to learn C first. I had not intended to follow his advice, but I decided to learn C++ by taking a university class on the subject and the CS department had wisely made learning C a prerequisite for learning C++. I later learned that they had been right to do that.

After learning C++, I went through a phase where I thought C++ was the best thing ever (much like how you treat Rust). I have since changed my mind. C is far better than C++ (less is more). I am immensely proud of the C code that I have written during my life while I profoundly regret most of the C++ code that I have written. A particular startup company that I helped get off the ground after college runs their infrastructure on top of a daemon that I wrote in C++. Development of that daemon had been a disaster, with C++ features making it much harder to develop than it actually needed to be. This had been compounded by my "use all of the features" mentality, when in reality, what the software needed was a subset of features and using more language features just because I could was a mistake.

I had only been with the startup for a short time, but rejoined them as a consultant a few years ago. When I did, I saw that some fairly fundamental bugs in how operating system features were used from early development had gone unresolved for years. So much of development had been spent fighting the compiler to use various exotic language features correctly that actual bugs that were not the language's fault had gone unnoticed.

My successor had noticed that there were bugs when things had gone wrong, but put band-aids in place instead of properly fixing the bugs. For example, he used a cron job to restart the daemon at midnight instead of adding a missing `freeaddrinfo()` call and calling `accept()` until EAGAIN is received before blocking in `sigwaitinfo()`. Apparently, ~3000 lines of C++ code, using nearly every feature my younger self had known C++ to have, were too complicated for others to debug.

One of the first things I did when I returned was write a few dozen patches fixing the issues (both real ones and cosmetic ones like compiler warnings). As far as we know, the daemon is now bug free. However, I deeply regret not writing it in C in the first place. Had I written it in C, I would have spent less time fighting with the language and more time identifying mistakes I made in how to do UNIX programming. Others would have been more likely to understand it in order to do proper fixes for bugs that my younger self had missed too.


> As for C++, there is nothing at that link that says you need a textbook to learn C++.

Sorry, it says that in their FAQ[0]. It also says "Should I learn C before I learn C++?" "Don’t bother." and proceeds to advertise a Stroustrup book[1].

[0]: https://isocpp.org/wiki/faq/how-to-learn-cpp#start-learning

[1]: https://isocpp.org/wiki/faq/how-to-learn-cpp#learning-c-not-...

> If you insist on learning C++ first, here is the first search result from DuckDuckGo when I search for "learn C++":

I don't insist on learning C++ and I even agree with you that C is better. But I have a problem with learning from non-authoritative sources, especially random websites and YouTube tutorials. I like to learn from official documentation. For C there appears to be no official documentation, and my intuition tells me that, as nickpsecurity mentioned, the best way is to read the K&R book. But that brings us back to my original point that you have to buy a book.

> was the one true way (like you seem to have been with Rust)

I don't think there exists any one true way. It depends on what you do. For example I like Rust but I never really use it. I pretty much only use TypeScript.

> was the best thing ever (much like how you treat Rust)

I would actually prefer Zig over Rust but the former lacks a mature ecosystem.

> For example, they used a cron job to restart the daemon at midnight instead of adding a missing `freeaddrinfo()` call and calling `accept()` until EAGAIN is received before blocking in `sigwaitinfo()`.

This sounds like a kind of bug that would never happen in Rust because a library would handle that for you. You should be able to just use a networking library in C as well but for some reason C/C++ developers like to go as far as even implementing HTTP themselves.

> After learning C++...

Thanks for sharing your story. It's wholesome and I enjoyed reading.


> Sorry, it says that in their FAQ[0]. It also says "Should I learn C before I learn C++?" "Don’t bother." and proceeds to advertise a Stroustrup book[1].

They also would say "Don't bother" about using any other language. If you listen to them, you would never touch Rust or anything else.

> But I have a problem with learning from non-authoritative sources, especially random websites and YouTube tutorials. I like to learn from official documentation. For C there appears to be no official documentation, and my intuition tells me that, as nickpsecurity mentioned, the best way is to read the K&R book. But that brings us back to my original point that you have to buy a book.

The K&R book is a great resource, although I learned C by taking a class where the textbook was "A Book On C". I later read the K&R book, but I found "A Book On C" to be quite good. My vague recollection (without pulling out my copies to review them) is that A Book On C was more instructional while the K&R book was more of a technical reference. If you do a search for "The C Programming Language", you might find a PDF of it on a famous archival website. Note that the K&R book refers to "The C Programming Language" by Kernighan and Ritchie.

Relying on "authoritative" sources by only learning from the language authors is limiting, since they are not going to tell you the problems that the language has that everyone else who has used the language has encountered. It is better to learn programming languages from the community, who will give you a range of opinions and avoid presenting a distorted view of things.

There are different kinds of authoritative sources. The language authors are one, compiler authors are another (although this group does not teach), engineers who actually have used the language to develop production software (such as myself) would be a third and educational institutions would be a fourth. If you are curious about my background, I am the ryao listed here:

https://github.com/openzfs/zfs/graphs/contributors

You could go to edx.org and audit courses from world-renowned institutions for free. I will do you a favor by looking through what they have and making some recommendations. For C, there really is only 1 option on edX, which is from Dartmouth. Dartmouth is a world-renowned university, so it should be an excellent teacher as far as learning C is concerned. They appear to have broken a two-semester sequence into 7 online courses (lucky you; I only got 1 semester at my university; there was another class on advanced UNIX programming in C, but they did not offer it the entire time I was in college). Here is what you want to take to learn C:

https://www.edx.org/learn/c-programming/dartmouth-college-c-...

https://www.edx.org/learn/c-programming/dartmouth-college-c-...

https://www.edx.org/learn/c-programming/dartmouth-college-c-...

https://www.edx.org/learn/c-programming/dartmouth-college-c-...

https://www.edx.org/learn/c-programming/dartmouth-college-c-...

https://www.edx.org/learn/linux/dartmouth-college-linux-basi...

https://www.edx.org/learn/c-programming/dartmouth-college-c-...

There is technically a certificate you can get for completing all of this if you pay, but if you just want to learn without getting anything to show for it, you can audit the courses for free.

As for C++, there are two main options on edX. One is IBM and the other is Codio. IBM is a well known titan of industry, although I had no idea that they had an education arm. On the other hand, I have never heard of Codio. Here is the IBM sequence (note that the ++ part of C++ is omitted from the URLs):

https://www.edx.org/learn/c-programming/ibm-fundamentals-of-...

https://www.edx.org/learn/object-oriented-programming/ibm-ob...

https://www.edx.org/learn/data-structures/ibm-data-structure...

There actually are two more options on edX for C++, which are courses by Peking University and ProjectUniversity. Peking University is a world class university in China, but they only offer 1 course on edx that is 4 weeks long, so I doubt you would learn very much from it. On the other hand, I have never heard of ProjectUniversity, and their sole course on C++ is only 8 weeks long, which is not very long either. The 3 IBM courses together are 5 months long, which is what you really want.

> I pretty much only use TypeScript.

Learn C, POSIX shell scripting (or bash) and 1 functional programming language (Haskell is a popular choice). You will probably never use the functional programming language, but knowing about functional programming concepts will make you a better programmer.

> This sounds like a kind of bug that would never happen in Rust because a library would handle that for you. You should be able to just use a networking library in C as well but for some reason C/C++ developers like to go as far as even implementing HTTP themselves.

First, I was using a networking library. The C standard library on POSIX platforms is a networking library thanks to its inclusion of the Berkeley sockets API. Second, mistakes are easy to criticize in hindsight with "just use a library", but in reality, even if you use a library, you could still make a mistake, just as I did here. This code also did much more than what my description of the bug suggested. The reason for using asynchronous I/O is to be able to respond to events other than just network I/O, such as SIGUSR1. Had I not been doing that, it would not have had that bug, but it needed to respond to other things than just I/O on a socket.

I described the general idea to Grok and it produced a beautiful implementation of this in Rust using the tokio "crate". The result had the same bug that the C++ code had, because it made the understandable assumption my younger self made that 1 SIGIO = 1 connection, but that is wrong. If two connection attempts are made simultaneously from the perspective of the software, you only get 1 SIGIO. Thus, you need to call accept() repeatedly to drain the backlog before returning to listening for signals.
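To make that concrete, here is a minimal C sketch of the drain-the-backlog pattern, under stated assumptions rather than the daemon's actual code: it assumes `listen_fd` is a non-blocking listening socket already configured for SIGIO delivery (via fcntl() with F_SETOWN and O_ASYNC), and the commented-out handle_* calls are hypothetical placeholders.

```
#include <errno.h>
#include <signal.h>
#include <stddef.h>
#include <sys/socket.h>

void event_loop(int listen_fd)
{
    /* Block the signals we want to receive synchronously via sigwaitinfo(). */
    sigset_t set;
    sigemptyset(&set);
    sigaddset(&set, SIGIO);
    sigaddset(&set, SIGUSR1);
    sigprocmask(SIG_BLOCK, &set, NULL);

    for (;;) {
        siginfo_t info;
        if (sigwaitinfo(&set, &info) < 0)
            continue;

        if (info.si_signo == SIGUSR1) {
            /* handle_sigusr1(); */
            continue;
        }

        /* One SIGIO does not mean one connection: drain the backlog
           until accept() reports there is nothing left. */
        for (;;) {
            int fd = accept(listen_fd, NULL, NULL);
            if (fd < 0) {
                if (errno == EAGAIN || errno == EWOULDBLOCK)
                    break; /* backlog drained; wait for the next signal */
                break;     /* real error; handle properly in production */
            }
            /* handle_connection(fd); */
        }
    }
}
```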

This logical error is not something even a wrapper library would prevent, although a wrapper library might have prevented the memory leak, but what library would I have used? Any library that wraps this would be a very thin wrapper, and the use of an additional dependency that might not be usable on then-future systems is a problem in itself. Qt has had two major version changes since I wrote this code. If I had used Qt 4's network library, this could have had to be rewritten twice in order to continue running on future systems. This code has been deployed on multiple systems since 2011 and it has never once needed a rewrite to work on a new system.

Finally, it is far more natural for C developers and C++ developers to use a binary format over network sockets (like I did) than HTTP. Libcurl is available when people need to use HTTP (and a wide variety of other protocols). Interestingly, an early version of my code had used libcurl for sending emails, but it was removed by my successor in favor of telling a PHP script to send the emails over a network socket (using a binary format).


> Thus, you need to call accept() repeatedly to drain the backlog before returning to listening for signals.

It's not just accept. If your socket is non-blocking the same applies to read, write, and everything else. You keep syscalling until it returns EAGAIN.

> I described the general idea to Grok and it produced a beautiful implementation of this in Rust using the tokio "crate". The result had the same bug that the C++ code had, because it made the understandable assumption my younger self made that 1 SIGIO = 1 connection, but that is wrong.

I don't know what your general idea was, but tokio uses epoll under the hood (correctly), so what you are describing could only have happened if you specifically instructed Grok to use SIGIO.

> Finally, it is far more natural for C developers and C++ developers to use a binary format over network sockets (like I did) than HTTP.

Designing a custom protocol is way more work than just using HTTP. <insert reasons why http + json is so popular (everyone is familiar with it blah blah blah)>.


> It's not just accept. If your socket is non-blocking the same applies to read, write, and everything else. You keep syscalling until it returns EAGAIN.

You do not call read/write on a socket that is listening for connections.

> I don't know what your general idea was but tokio uses epoll under the hood (correctly), so what you are describing could only have happened if you specifically instructed grok to use SIGIO.

That is correct. There is no other way to handle SIGUSR1 in a sane way if you are not using SIGIO. At least, there was no other way until signalfd was invented, but that is not cross platform. epoll isn't either.

> Designing a custom protocol is way more work than just using HTTP. <insert reasons why http + json is so popular (everyone is familiar with it blah blah blah)>.

You are wrong about that. The code is just sending packed structures back and forth. HTTP would overcomplicate this, since you would need to implement code to go from binary to ASCII and ASCII to binary on both ends, while just sending the packed structures avoids that entirely. The only special handling this needs is to have functions that translate the structures from host byte order into network byte order and back, to ensure that endianness is not an issue should there ever be an opposite endian machine at one end of the connection, but those were trivial to write.
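As a rough illustration of this approach (a sketch only; the message layout and field names here are hypothetical, not the actual wire format):

```
#include <arpa/inet.h> /* htonl/ntohl, htons/ntohs */
#include <stdint.h>

/* A hypothetical message; __attribute__((packed)) is a GCC/Clang
   extension that removes padding so the bytes on the wire match the
   struct layout exactly. */
struct __attribute__((packed)) order_msg {
    uint32_t order_id;
    uint16_t item_count;
    uint16_t flags;
};

/* Host to network byte order, called before send(). */
static void order_msg_hton(struct order_msg *m)
{
    m->order_id   = htonl(m->order_id);
    m->item_count = htons(m->item_count);
    m->flags      = htons(m->flags);
}

/* Network to host byte order, called after recv(). */
static void order_msg_ntoh(struct order_msg *m)
{
    m->order_id   = ntohl(m->order_id);
    m->item_count = ntohs(m->item_count);
    m->flags      = ntohs(m->flags);
}
```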

Do yourself a favor and stop responding. You have no idea what you are saying and it is immensely evident to anyone who has a clue about software engineering.


> You are wrong about that. The code is just sending packed structures back and forth.

Among other things, this would only work if your client is written in a language that supports C structures.

> Do yourself a favor and stop responding. You have no idea what you are saying and it is immensely evident to anyone who has a clue about software engineering.

Says the one who didn't know how to use non-blocking sockets.

> That is correct. There is no other way to handle SIGUSR1 in a sane way if you are not using SIGIO. At least, there was no other way until signalfd was invented, but that is not cross platform. epoll isn't either.

```
use std::io;

use tokio::{
    net::UnixListener,
    select,
    signal::unix::{SignalKind, signal},
};

#[tokio::main(flavor = "current_thread")]
async fn main() -> io::Result<()> {
    let mut signal = signal(SignalKind::user_defined1())?;
    let listener = UnixListener::bind("./hi")?;

    loop {
        select! {
            _ = signal.recv() => {
                todo!();
            }
            _ = listener.accept() => {
                todo!();
            }
        }
    }
}
```


> Among other things, this would only work if your client is written in a language that supports C structures.

Such languages are used at both ends. Otherwise, this would not have been working in production for ~13 years.

> Says the one who didn't know how to use non-blocking sockets.

Neither did you until you learned it. If you successfully begin a career in software engineering and, years later, have the humility to admit the mistakes you made when starting out for the benefit of others, you will deserve to have an incompetent know-it-all deride you for having been so kind as to admit them, just like you are doing to me here.

Anyone with a modicum of programming knowledge can write code snippets free from mistakes immediately after being told about the mistakes that would be made when given a simplified explanation of one thing that is done in production software. The problem with software engineering is that nobody tells you everything you can do wrong before you do it, and you are not writing code snippets, but production software.


> If you successfully begin a career in software engineering and, years later, have the humility to admit the mistakes you made when starting out for the benefit of others, you will deserve to have an incompetent know-it-all deride you for having been so kind as to admit them, just like you are doing to me here.

I wouldn't "deride" you if you weren't acting arrogant and calling me incompetent or whatever.

> The problem with software engineering is that nobody tells you everything you can do wrong before you do it

Maybe, but the docs definitely tell you about EAGAIN and freeing memory after you're done using it. In Rust many kinds of logical errors you could potentially have made are eliminated by the type system though. For example I wrote a news scraper in Rust and ran it locally a couple times to see that it works, and it's been running for half a year now on a VPS and I never had to touch it or restart anything.


> You could go to edx.org and audit courses...

This is great advice, thanks!


Auto as it is now has been in C++ since C++11; that's more than a decade ago...

If your argument was about C then sure, that's a C23 feature (well, the type-inference kind of auto) and is reasonably new.

This is much more a reflection on your professor than on the language. C++11 was a fundamental change to the language; anyone teaching or using C++ in 2025 should have an understanding of how to program well in a 14-year-old version of said language...


> Auto as it is now has been in C++ since C++11; that's more than a decade ago...

> anyone teaching or using C++ in 2025 should have an understanding of how to program well in a 14-year-old version of said language...

If the current year is 2025 then 14 years ago is 2011 which is not that long ago.

> If your argument was about C then sure, that's a C23 feature (well, the type-inference kind of auto) and is reasonably new.

Grandparent comment is arguing that Linux was written in C89 until a few days ago, so decades-old books on C aren't actually outdated.


Decades-old books on C most certainly are still useful, even in modern C++23, because you need to interact with other libraries written in C89.

When a lot of modern CS concepts were first discovered and studied in the 70s, there's no point arguing that old books are useless. Honestly, there may be sections of old books that are useless, but on the whole they are still useful.


We're talking about learning C/C++ from scratch, which makes no sense to do by using a decades-old book because it wouldn't teach you any modern features. Also, we're not talking about computer science.

You do not need to know about modern features to write code in C. This is part of computer science.

> You do not need to know about modern features to write code in C.

Then what’s the point of adding any new features?


Some people want to use them; they are useful in some contexts and often already exist in some form elsewhere, but the majority of people do not need them.

That said, when you learn a foreign language, you do not learn every word in the dictionary and every grammatical structure. The same is true for programming. You just don't need a more up-to-date book than one on C89 to begin learning C.


> Preferably, you should use a single <h1> per page—this is the top level heading, and all others sit below this in the hierarchy

From the MDN docs on headings and paragraphs [0].

Yet this article effectively states that it isn't just preferred but required, seeing as the places where it semantically makes sense to use multiple H1 tags on a page will now log warnings to developers (article, aside, nav, etc.).

The article mentions confusion, yet the de facto documentation on the web encourages the confusion by not being more specific...

0. https://developer.mozilla.org/en-US/docs/Learn_web_developme...


It's a weird one. HTML does have a TITLE tag, but it's supposed to go in the HEAD; the BODY does not have a TITLE. Any word processor uses something called a Title (and Subtitle) at the top of your document and then things like Heading1 for sections. HTML originally didn't have dedicated tags for this; just h1-h6 (because surely six is enough for anyone). So you get this weird off-by-one error that arises from the notion that HTML just lacks essential tags: you use H2 to mean Heading1, because H1 is reserved for the title. Never mind about subtitles. Not a thing in HTML.

Mostly this is because Tim Berners-Lee probably didn't think this one through properly decades ago. And it was never really fixed. These days you can just invent your own tag names and style them, of course, which is a useful trick that is a bit underused. The structural semantics are nice for things like accessibility, SEO, and a few other things, but otherwise HTML is a really poor choice of format for exchanging structural information. You preferably generate it from other formats; writing it manually is a PITA, even if you are a developer. Things like Markdown exist for a reason (and perpetuate the problem).


I don't think it's fair to blame Tim Berners-Lee for this. The WWW was supposed to serve documents. The TITLE would have been rendered by your browser, and in fact, it still kind of is, in the window title bar.

The Web has long departed from that vision however; very few pages, if any, could still be considered documents.


I think it is fair to say TBL didn't come up with a perfect design, and he never set out to. He solved a simpler problem than the ones we have now, and that's just fine. He might be the last person to imagine HTML would have continued on with so much of its original design intact.

The article says

> Do not rely on default browser styles for conveying a heading hierarchy. Explicitly define your document hierarchy using <h2> for second-level headings, <h3> for third-level, etc.


The argument I was making was: why would

    <main>
        <h1>Main Heading</h1>
        <section>...h2... etc.</section>
    </main>

    <aside>
        <h1>Aside heading</h1>
        <section>...h2...</section>
    </aside>
be incorrect? The original HTML standard clearly stated this was acceptable, but now there should only be a single H1, which is the page heading, and all other headings should be H2 and lower. What if the page content doesn't actually have a single main heading? This change fundamentally changes the semantics of something which has had unclear semantics for decades, and which actually rendered what I typed above correctly in the past. Now it would not be rendered correctly anymore.

The spec allows multiple top-level headings: https://html.spec.whatwg.org/multipage/sections.html#heading...

> Instruct the browser to re-load the page upon navigating back (cacheability headers), identify the order using an ID in the URL, then when reloading detect its already-submitted state on the server

And how would one do that without using JS?


Which part exactly?

Re-loading the page on navigating back would be done using cacheability headers. This is the most shaky part, and I'm not sure if it is possible today. If it does indeed not work, then this would be one of the "things that JavaScript has solved that the non-JS web is still stuck with" I mentioned in my other post, i.e. one of the reasons that JS is more popular than non-JS pages today.

Identifying the order using an ID in the URL is standard practice everywhere.

When the order page gets requested, the server would take that ID, look the order up in the database and see that it is already submitted, responding with a page that has its submit button disabled.


The must-have-JS part for me starts where one can open the store in multiple tabs, then add and remove things in the first two and check out in the third tab.

For many use cases where native apps would previously have been accepted, users don't want native applications anymore.

For work, other than some very industry-specific high-performance software, most business software is web based, and users (those paying the bills, anyway) want it to be web based because it is much more portable and easy to deploy.


> Why should I hire you if an agent can do it ?

You as the employer are liable. A human has real reasoning abilities and real fears about messing up; the likelihood of them doing something absurd like telling a customer that a product is 70% off, and then not losing their job, is effectively nil. What are you going to do with the LLM, fire it?

Data scientists and people deeply familiar with LLMs, to the point that they could fine-tune a model to your use case, cost significantly more than a low-skilled employee, and depending on liability just running the LLM may be cheaper.

As for an accounting firm (one example from above), as far as I know, in most jurisdictions the accountant doing the work is personally liable; who would be liable in the case of the LLM?

There is absolutely a market for LLM-augmented workforces; I don't see any viable future, even with SOTA models right now, for flat-out replacing a workforce with them.


I fully agree with you about liability. I was advocating for the other point of view.

Some people argue that it doesn't matter if there are mistakes (it depends which, actually) and that with time it will cost nothing.

I argue that if we give up learning and let the LLM do the assignments, then what is the extent of my knowledge and value to be hired in the first place?

We hired a developer and he did everything with ChatGPT: all the code and documentation he wrote. At first it was all bad, because from the infinity of possible answers, ChatGPT does not pinpoint the best one in every case. But does he have enough knowledge to understand that what he did was bad? We need people with experience who have confronted hard problems themselves and found their way out. How can we confront and critique an LLM's answer otherwise?

I feel students' value is diluted, leaving them at the mercy of the companies providing the LLM, and we might lose some critical knowledge / critical thinking from the students in the process.


I agree entirely with your take regarding education. I feel like there is a place where LLMs are useful without impacting learning, but it's definitely not in the "discovery" phase of learning.

However, I really don't need to implement some weird algorithm myself every time (ideally I am using a well-tested library). The point is that you learn so that you are able to, and also so that you can modify or compose the algorithm in ways the LLM couldn't easily do.


Why did you hire someone who produced bad code and docs? Did he manage to pass the interview without an AI?

100%, I have family in manufacturing and this isn't anything new. Most current manufacturing plants already run with effectively a skeleton staff compared to 50 years ago.

Yes, they do, that is true; however, that's with [some]-axis stationary robots, not humanoid robots literally running around. The best we can do right now, AFAIK, is that robot-dog-like thing which can overcome obstacles and be equipped with sensors. Nothing human-like.

If I imagine running into a factory full of "thinking" (current top-of-the-line LLM benchmark level) humanoid-looking robots collaborating on tasks dynamically as needed... In my book that is as dystopian as it gets and has nothing to do with the current level of automation that's happening; that's a whole new level.


Safety standards in terms of programming and logic, and OSHA, are going to have to change a lot before that happens:

https://www.bbc.com/news/world-europe-62286017


100% still search first. If I am not super knowledgeable about the domain I am searching, I use an AI to get me keywords and terminology and then search.

At most I use AI now to speed up my research phase dramatically. AI is also pretty good at showing what is in the ballpark for more popular tools.

However, I am missing forum-style communities more and more. Sometimes I don't want the correct answer; I want to know what someone who has been in the trenches for 10 years has to say. For my day job I can just make a phone call, but for hobbies, side projects, etc., I don't have the contacts built up and I don't always have local interest groups that I can tap for knowledge.

