I think it's even more interesting that the amount of energy required to do that much computational work isn't that high. Evolution has been working on it for a long time, and some things are really inefficient, but overall it does an OK job at making squishy machines.
I had a good chuckle at "squishy machines". That's a really interesting way to think about it. It makes me wonder if, some day, we will be able to build "squishy machines" of our own, capable of outperforming silicon while using a tiny fraction of the energy.
Yes, we need copyright... we need to rework it for digital stuff. However, if you are an artist, writer, musician, or programmer, you should get credit for the work you put out in some way, no?
As someone who writes and has some minor published fictional works, I keep being genuinely baffled by how extreme people can get with this. Why shouldn't I have protection for works I created? What gives someone else the right to just take my story, add tidbits, sell it and make a profit?
You’ve grown up in a world where the idea that someone “owns” a song, a story, etc, is culturally normal. That doesn’t make it a law of nature. It’s just your culture.
I find the culture of “ownership” of ideas morally repugnant. That’s not culture I perpetuate or encourage.
I’m willing to tolerate some of it in the form of limited-time monopolies if it benefits humanity.
> I’m willing to tolerate some of it in the form of limited-time monopolies if it benefits humanity.
So in other words you are absolutely fine with the concept of the ownership of ideas, you just disagree on what the legal terms should be around it. Which is fine! I disagree with the current copyright regime too.
But I think having some form of (much more limited) copyright would benefit humanity much more than having no copyright at all.
>I find the culture of “ownership” of ideas morally repugnant
I see that we're starting off with a misrepresentation already. Copyright isn't a patent.
Copyright isn't solely ownership of 'ideas,' it's ownership of actual work. You cannot 'copyright' an idea.
> It’s just your culture.
I'm sure when it's the 'culture' of every single country in the world, it's an alright culture.
>I’m willing to tolerate
> That’s not culture I perpetuate or encourage.
See what's great about this is that your extremist ideas are in a minority, and you do not get to make the laws. So what you're willing to 'tolerate' or not is irrelevant. No one is looking to you to 'encourage' or 'perpetuate' the culture. Demonizing artists by going "You don't own your own work, I should be able to own it, I made this!" will only get you so far.
---
You shouldn't have free rein to profit off of someone else's artistic hard work without a way for them to benefit from that as well.
Just goes to show you that morals are relative, and have no absolute standing as to what's "okay" and what isn't.
I'm not the person you're replying to, but I can say I'm not angry about this at all (frankly, it feels like you're the one with the axe to grind here). I don't know about indoctrination; certainly I am a product of my environment, at the very least.
But I guarantee you that if copyright weren't a thing, the catalog of creative works in the world would be a tiny fraction of what it is today. And that would continue to be the case unless the human race can get to a point of post-scarcity, where we don't have to work to put food on the table or roofs over our heads, or have a decently nice standard of living.
Because the majority of people in the world who make things that fall under copyright would not be financially able to keep creating those things if they couldn't make money off of it. And sure, there are sometimes ways to make money off creative works without relying on copyright, but I don't think those ways cover enough to be meaningful.
>You seem pretty angry that somebody would think differently, too.
I'm not angry at all, I just don't see the world purely in black and white as you seem to do. On the other hand, you're the one throwing around words such as "indoctrinated" and all that.
I mean, you're the one who insinuated that it should be 100% okay for me to write a work with 100k words, and for you to take those words, add 200 more, call the story 'yours', and profit off it. If THAT is what you believe 'natural law' is, then I'm absurdly glad that it does not work that way.
>I can’t conceive of looking at the world that way and being morally okay with it.
I can't conceive of having a morality that is basically "it should be okay to take/steal other people's works and do whatever you want with them, regardless of that person's feelings on the matter."
This is why we have open source licenses for people who WANT to grant others that ability to do so. You, as someone who has arguably been in the industry for so long, should be fairly familiar with it.
We live in a society with certain ideas of copyright floating around since we were kids. Copyright is a very nice thing for artists to have and I understand why people get defensive about arguments for taking copyright away from them. However, the presumption that you can have copyright on a story is a very modern take.
> Why shouldn't I have protection for works I created?
It's hard to argue about negatives. In my opinion, the real question is: why should copyright exist in the first place?
Copyright in its modern form has existed for what, 400 years? It's not exactly a requirement for a culture to develop. Obviously, you want protection; many people do.
There are answers to this question. For one, big companies wouldn't be spending billions on movies, TV, and music if piracy were legal. The internet allowing instant copies of any work has also changed the situation significantly. Then again, piracy is easier than ever, there is way too much free online content to compete with to warrant the prices of a lot of media, and piracy is already effectively legal enough that typing "<Movie name> 2023 free stream" into Google will give you a variety of piracy websites to choose from.
On the other hand, most of the internet is very copyright-hostile. From meme templates to fanfiction and embedding foreign site content, most online communities have a very relaxed take on what copyright means. Imagine what would happen if the person who drew the original trollface were to go around demanding copyright fees and starting lawsuits for violating the rights to his property!
> What gives someone else the right to just take my story, add tidbits, sell it and make a profit?
What gives you the right to prevent them from improving your work? If I don't like one of your characters and think the story would be better if I put in a character of my own, who are you to decide what I can or cannot do with that idea?
I'm not anti-copyright (although I think the current form is absolutely ridiculous with its "70 years after the death of the author" terms) and I do enjoy the feeling of having control over my works, but I can't come up with a good rational reason why I have the right to tell someone else what they can or cannot do with the works I create.
We probably need some way to protect small creators (Patreon and friends are an excellent development!) but I would prefer to live in a world where https://www.youtube.com/watch?v=5GFW-eEWXlc can exist without the ever-present threat of lawyers.
>What gives you the right to prevent them from improving your work? If I don't like one of your characters and think the story would be better if I put in a character of my own, who are you to decide what I can or cannot do with that idea?
If this were the case I'd just not publish, and keep my story to myself, limited to a few people I trust. This is what you'd get if copyright weren't there.
>On the other hand, most of the internet is very copyright-hostile. From meme templates to fanfiction and embedding foreign site content, most online communities have a very relaxed take on what copyright means. Imagine what would happen if the person who drew the original trollface were to go around demanding copyright fees and starting lawsuits for violating the rights to his property!
Most of the artistic internet doesn't contain works (outside of certain OSS communities) that may have taken someone years to complete and consist of hundreds of thousands of words. I'm amused that you're somehow okay with the idea of taking a 300,000-word story, adding 10k words, modifying some bits, and selling it as "yours", profiting off my work.
>Copyright in its modern form has existed for what, 400 years? It's not exactly a requirement for a culture to develop.
>In my opinion, the real question is: why should copyright exist in the first place?
I mean, you can just say that about modern forms of democracy and civil rights. Then why have them in the first place? We can have monarchies, autocracies, and theocratic/thalassocratic republics just fine.
I see these posts pop up every now and then. I admittedly don't use GPT-4 or ChatGPT that often, but I don't notice that much of a difference. Is it possible you're giving it harder and harder tasks, and it's failing at those instead of the easier tasks it solved when you used it before? Is it possible it's just scaled back due to overuse? It could be a dumb question. In my experience, even a few weeks ago, for Swift and Kotlin I found that the outputs of ChatGPT and GPT-4 were comparable, and sometimes useless without a good amount of human intervention.
I don't feel that way personally... I have Nth-level anxiety, maybe even N+1-level anxiety, about losing my job to AI. Maybe in the future whoever comes next can enjoy things, but this literally keeps me up at night. With talk of extinction, job loss, etc., I feel like I wish I weren't alive at this time.
Oh god, my background is CE/ECE stuff and you managed to trigger me. I don't want to be rude... just bluntly saying you triggered me. Doing something really small for A/D and D/A with 8 bits and not worrying much about resolution and data loss is one thing. At massive scale the problem is a lot less trivial and a lot more mathematical.
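To make the resolution/data-loss point concrete, here is a minimal NumPy sketch (my own illustration, not something from the thread) that quantizes a sine wave with an ideal 8-bit converter and measures what is lost; the number it reports sits near the textbook 6.02*N + 1.76 dB limit.

    import numpy as np

    # "Analog" signal: a sine at 95% of full scale, sampled finely.
    t = np.linspace(0, 1, 10_000, endpoint=False)
    x = 0.95 * np.sin(2 * np.pi * 5 * t)

    # Ideal 8-bit A/D: 256 levels spanning [-1, 1).
    bits = 8
    levels = 2 ** bits
    step = 2.0 / levels
    codes = np.clip(np.round(x / step), -levels // 2, levels // 2 - 1)

    # D/A: map the codes back to voltages and see what was lost.
    x_hat = codes * step
    err = x - x_hat

    sqnr_db = 10 * np.log10(np.mean(x**2) / np.mean(err**2))
    print(f"step size: {step:.5f}, max error: {np.abs(err).max():.5f}")
    print(f"measured SQNR: {sqnr_db:.1f} dB (ideal 8-bit limit ~49.9 dB)")

At 8 bits that error floor is often fine for a toy project; the point is that at massive scale, these losses and how they accumulate are exactly what you have to reason about mathematically.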
Yeah, they do all the time. I remember my parallel computing course back in grad school, where we got to use our 800-core test PC and people were running simulations of different weather patterns and climate change, earthquake simulations, and whatnot. A lot of that can be done by taking advantage of all of those cores. Academia in particular heavily uses these to get closer to the "physics", within clear discrete limitations.
1. Low-latency network, 1-2 µs. Most servers can't ping their local switch that quickly, let alone the most distant switch in a 1M-node system.
2. High-bandwidth network, at least 200 Gbit/s.
3. A parallel filesystem
4. Very few node types.
5. Network topology designed for low latency/high bandwidth, things like hypercube, dragonfly, or fat tree.
6. Software stack that is aware of the topology and makes use of it for efficiency and collective operations (see the sketch after this list).
7. Tuned system images to minimize noise, maximize efficiency, and reduce context switches and interrupts. Reserving cores for handling interrupts is common at larger core counts.
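To make item 6 concrete, here is a minimal mpi4py sketch (my own illustration; the script name in the run command is made up) of the kind of collective operation the software stack maps onto the interconnect. The application just calls allreduce; the MPI implementation chooses the reduction pattern that suits the topology.

    # Minimal MPI collective: every rank contributes a partial sum and the
    # MPI library combines them; how the reduction tree follows the network
    # topology is the stack's job, not the application's.
    # Run with something like: mpirun -n 8 python allreduce_sketch.py
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    # Each rank owns a slice of the problem (here, just some numbers).
    local = np.arange(rank * 1000, (rank + 1) * 1000, dtype=np.float64)
    local_sum = local.sum()

    # One call; the implementation picks the communication pattern
    # (ring, tree, ...) best suited to the interconnect.
    global_sum = comm.allreduce(local_sum, op=MPI.SUM)

    if rank == 0:
        print(f"{size} ranks, global sum = {global_sum}")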
I think that is his point. At most places I've worked in my almost 15 years of experience, a senior title is not based on time but on goal setting. Some places do that better than others, for sure, but it's not just magic where you hit 7 years and now you are a senior. A senior engineer in 3 years??? In my experience that's a SWE who is still taking baby steps.
This seems sloppy too. Is this based on anything scientifically rigorous? Where I live, even startups will deliberately take on less profit for a quarter to train new graduates who have never touched an IDE. The point of training a junior isn’t to be immediately competitive. It’s for them to take over your job as you move on to other roles.
What? Nobody is defending their jobs here. It’s explaining how non-trivial this is. What is the point of your comment? If you want a pure token-to-money amount and are too lazy to calculate it yourself, use the tool in the link. That’s not how the real world will work, even with an LLM. Let’s assume the LLM writes everything; even GPT-4 won’t get everything in one go right now. Requirements of software are hard to put into pure English… why write this snarky message? It adds nothing to the conversation.
These comments remind me of newspaper articles saying the internet will fail and horse breeders saying cars will fail. Everyone wants to deflect to how wrong this is, when it should be treated like the Drake Equation: it can still give you data and ranges. With time and AI evolution, the estimates will only get closer.
“It is difficult to get a man to understand something, when his salary depends on his not understanding it.” -Upton Sinclair
You misunderstood what so many people here wrote, with even more snark and a bad quote. It could and probably will replace my job, and I have no contingency plan in place, leaving me at great risk. That’s a given, it seems. What isn’t a given is putting an estimate on a dollar amount and a timeframe, especially with any scientific rigor. The tool in the link is silly, and that is what everyone is talking about. Perhaps you should try to be less biased when reading the comments here.
> Even in cases like that, it's illogical to fear AI, because if it really was so simple to automate, it would've been done already.
This hasn't been my experience. There is a TON of low-hanging fruit for automating the workloads that exist in almost every large company. While not all automations are easy, there's no shortage of easy work in automation. Almost every financial organization I've worked at has had some amount of manual processing of overnight batch jobs.
I agree with you insofar as everyone with even a little experience at a large company knows there are a lot of apparently low-hanging-fruit opportunities for automation.
But do you mean you have seen things that seem simple to automate, or have you tried to do so and found out what happens?
Yes, plenty are absolutely trivial to implement. Things like "I move the file from this folder, where it's dropped off by overnight batch processing, to this other folder and then I kick off the processing job" or "I copy these values into our master Excel spreadsheet and take the results from the calculations and put them in this other system."
Extremely basic, low-hanging fruit is all over the place.
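For what it's worth, a task like the first one is often just a few lines of script. A minimal sketch, assuming hypothetical folder names and a hypothetical process_batch.py job (none of these paths come from any real system):

    import shutil
    import subprocess
    from pathlib import Path

    # Hypothetical locations; in a real shop these would be whatever the
    # overnight batch job drops off and the downstream processing expects.
    DROPOFF = Path("/data/batch/dropoff")
    READY = Path("/data/batch/ready")

    for f in sorted(DROPOFF.glob("*.csv")):
        # The step a human used to do by hand: move the file...
        target = READY / f.name
        shutil.move(str(f), str(target))
        # ...then kick off the processing job for it (also hypothetical).
        subprocess.run(["python", "process_batch.py", str(target)], check=True)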
It’s that good in your experience? Enough that it is close to writing something as nice as the Linux Kernel we have now? How close is close? Reading comments like these makes me feel like I’m swallowing crazy pills. It legitimately makes me feel like I’m using it wrong. I do Android and iOS native development and some IoT, and it gives me really wrong code very often, to the point that I don’t see much difference between ChatGPT and GPT-4. But you say it’s close to just “write me a unix-like kernel”?
Personally I couldn't even get it to draw me a regular pentagon in CSS. It was happy to draw hexagons and call them pentagons. It was even happy to go back and fix its mistakes when I informed it that it was making hexagons, not pentagons.
It, of course, readily accepted that I was correct and it had indeed drawn a hexagon, but this time it'd be different.
I don't think high-brow / low-brow is a useful framework for understanding these models. They are good at certain things and bad at others and those don't correspond neatly to what a human finds easy and hard.
It's not close and I also see no big difference between GPT 3.5 and 4. Don't get hyped. I am sure it will eventually happen, but calling it close is very optimistic.
Not "close" as in "it can nearly write it correctly", but "close" as in "I believe within a small number of years, gpt-4-like tools will be able to write you a unix-like kernel from scratch and have it actually work, with no human input".
I think we already have most of the pieces in place:
* big language models that sometimes get the right answer.
* language models with the ability to write instructions for other language models (i.e. writing a project plan, and then completing each item of the plan, and then putting the results together).
* language models with the ability to use tools (i.e. "run valgrind, tell the model what it says, and then the model will modify the code to fix the valgrind error").
* language models with the ability to summarize large things to small.
* language models with the ability to review existing work and see what needs changing to meet a goal, including chucking out work that isn't right/fit for purpose.
With all these pieces, it really seems that with enough compute/budget, we are awfully close...
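As a rough sketch of how those pieces could fit together (entirely hypothetical; ask_llm and run_valgrind below are placeholders for "call the model" and "run the tool", not real APIs):

    # Hypothetical agent loop: plan, implement each step, then use a tool's
    # output (valgrind here) as feedback. ask_llm() and run_valgrind() are
    # placeholders, not real libraries.
    def build_project(goal, ask_llm, run_valgrind, max_fix_rounds=5):
        # 1. The model writes a step-by-step plan (one item per line).
        plan = ask_llm(f"Write a step-by-step plan to: {goal}")

        # 2. It completes each item of the plan, accumulating code as it goes.
        code = ""
        for step in plan.splitlines():
            code += ask_llm(f"Code so far:\n{code}\nComplete this step: {step}")

        # 3. Tool feedback loop: run valgrind, feed the report back, ask for fixes.
        for _ in range(max_fix_rounds):
            report = run_valgrind(code)
            if "ERROR SUMMARY: 0 errors" in report:
                break
            code = ask_llm(f"Fix this code given the valgrind output:\n{report}\n\n{code}")

        return code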
It could also be a case of Tesla's full self-driving that's perpetually 2 years away. Progress in AI isn't linear so you can't extrapolate based on historic data. We really don't know if we're just a small step away from AGI or if we'll be stuck with the current crop of LLMs, with only incremental improvements over the next decade.
That also seems to miss the intentionality that goes into some things in the kernel… I understand now you mean once LLMs are improved with that kind of feedback loop. I guess fair enough there; no idea if that will work till we see it. However, I think the problem of a Unix-like kernel is a lot less trivial due to the human intentionality that goes into some choices, as well as the bit-banging optimization.
> human intentionality that goes into some choices
Many choices are made at design time to make the right tradeoffs between complexity, speed, etc.
But with AI-designed things, complexity is no longer an issue as long as the AI understands it, and you no longer need to think too much about speed - just implement 100 different designs and pick the one which does best on a set of benchmarks also designed by the AI.
Can any of these do any real reasoning? I feel this would just result in some Rube Goldberg machine producing terrible shit the vast majority of the time, but it may be impressive in some edge cases when the star constellations are right.
I agree. You can also go to github.com/torvalds/linux, click on Code -> Download ZIP, and have the Linux Kernel.
I’m not sure what having ChatGPT write it for you would give you. Maybe 10,000 hours of frustration after you realize how far from the mark you really are?