Hacker News | brown's comments

Love Postgres, but major version upgrades remain the one genuine pain point. One misstep and you're in backup-restoration hell. Solid guide.


There are two interesting "safety nets" at play here: the classic "Nobody ever got fired for choosing IBM" principle (where ubiquity creates an implicit guarantee of continuity), and a less visible but equally powerful one where certain open source tools become such fundamental infrastructure that they're essentially "too critical to fail." Think curl, gpg, or apt - most users never directly interact with these, but they're so deeply embedded in the internet's fabric that the ecosystem ensures their maintenance. One heuristic I've found helpful is looking at major corporate adoption patterns - it can be a decent signal for identifying which tools fall into these categories.


I highly recommend js13k (https://js13kgames.com), an annual game jam where your entire game must fit in 13 kilobytes. The tight size limit forces you to get creative with optimization - think procedural generation, custom minimal engines, and code golf techniques that actually matter. While the games might be "useless" in a practical sense, you'll learn more about low-level JavaScript optimization than from most serious projects, all while being part of an incredibly supportive community that loves sharing tricks and tips.


js13k is a game jam competition. Make a game that fits in a 13 kilobyte zip.

"The thirteenth, anniversary edition of the online js13kGames competition starts… NOW! Build a Web game on a given theme within the next month and fit it into a 13 kilobyte zip package to win lots of cool prizes, eternal fame, and respect!"


Medplum (https://www.medplum.com/) | Founding Developer Experience Engineer | SF | Full time

Medplum (YC S22) is an open source, API-first healthcare developer platform: a "headless EHR". We take care of the security, compliance, and regulatory burdens of healthcare software development. Well funded and growing fast.

We're hiring an amazing Dev-Ex / Dev-Rel engineer to delight customers, build sample apps, and promote the Medplum platform.

Tech stack: TypeScript, React, Node.js, AWS

Learn more: https://www.medplum.com/careers/devex-engineer



In that one, balls seem to be able to punch through (and flip) squares when moving almost vertically, instead of bouncing off every square they hit.


It's a bug/choice that is present in the original as well.


That's definitely cool to watch; the patterns of territories get a lot more complex.


Spent a fair bit of time watching that one last night, thanks. I can definitely see why computer scientists were entranced by Conway's Game of Life.


“Training costs scale with the number of researchers, inference costs scale with the number of users”.

This is interesting, but I think I disagree? I'm most excited about a future where personalized models are continuously training on my own private data.


How can you disagree with that statement? Training takes significantly more processing power than inference, and typically only the researchers will be doing the training, so it makes sense that training costs scale with the number of researchers, as each researcher needs access to their own system powerful enough to perform training.

Inference costs scaling with the number of users is a no-brainer.

I'm pretty dumbfounded how you can just dismiss both statements without giving any reasoning as to why.

EDIT:

> I'm most excited about a future where personalized models are continuously training on my own private data.

This won't be as common as you think.


> typically only the researchers will be doing the training

Citizen LLM developers are becoming a thing. Everyone trains (mostly fine-tunes) models today.


Non-technical people will not be fine-tuning models. A service targeted at the masses is unlikely to fine-tune a per-user model. It wouldn't scale without being astronomically expensive.


We will need at least one- if not several- research and data capture breakthroughs to get to that point. One person just doesn't create enough data to effectively train models with our current techniques, no matter what kind of silicon you have. It might be possible, but research and data breakthroughs are much harder to predict than chip and software developer ergonomics improvements. Sometimes the research breakthroughs just never happen.


My favorite URL oddity has to be "id.me" for U.S. Citizen identity services.

Seems a bit odd to use a Montenegro domain, doesn't it?


It seems to be run by a third-party company that the government latched on to for some reason: https://en.wikipedia.org/wiki/ID.me


Nothing out of the ordinary for individual government departments to turn to private contractors when the GSA doesn't offer them a service they need when they need it.

GSA has since developed login.gov, but there hasn't been a mandate that other agencies have to use it over third-parties.


Ah I see! Hope there's one soon


There was another one (census, maybe? can't recall which agency it was) using a .gd for a while, too... don't see it on the list anymore. Not sure who signed off on putting government services behind the "control" of a country we've invaded before.


The domain name made me curious: http://10x.engineer/

"404 Not Found: 10x Engineers aren't real"

Well played.


For anyone who wants to slow the development of AI, copyright is the soft underbelly to go after.


Are you seriously arguing that stealing code is okay in the name of "AI development"?


I think their comment was to the contrary, that the copyright/legal implications of 'stolen' code could seriously hobble the wider development, proliferation, adoption, and commercialization of AI software.


Maybe I misunderstood, but the comment seemed to dismiss copyright issues as a cheap way to kill AI ("soft underbelly"). I think stealing code is a pretty serious deal and the onus is on AI software companies to make sure they aren't doing it; it's not "slowing the development of AI" to keep them accountable.


Soft underbelly isn't dismissive, they're just saying that it is the natural target to aim for.


"soft underbelly" is synonymous with "weak point" or "Achilles heel", it's in no way dismissive. If anything, it's the opposite of dismissive.


Are you seriously arguing that using short snippets of open source code to inspire similar, yet not exactly the same, original code is "stealing code"? Human developers do that all day long. And just because a piece of code exists in a GPL project doesn't mean it originated there. Every algorithm or sort function likely originated in a more permissively licensed project before it got included in a GPL project.


What happens if I (a human) read GPL code and then reuse the knowledge gained from it in my own commercial projects? It's not as clear cut as you make it sound.


It could be as clear-cut as you've just made it: "a human". An LLM is not a human.

You could get into the semantics of "learning" - does JPEG encoding count as the computer "learning" how to reproduce the original image? But trying to create some metric for why LLMs "learn" and JPEG doesn't "learn" on the basis of the algorithms is a philosophical endeavor. Copyright is more about practicality - about realized externalities - than it is about philosophy. That's why selling cars and selling guns are regulated differently, despite the fact that you could reduce both to "metal mechanical machines that kill" by rhetorical argument.

Even from a strictly legal perspective, it actually is fairly clear-cut. The answer to "what if I (a human) read GPL code and then reuse the knowledge gained from it..." comes down to a few straightforward properties of the license. The GPL doesn't cover "reduced to practice" the way many corporate contracts do, so its terms around "the knowledge gained" are lenient. It does cover "verbatim" copies, which is what LLMs are producing; that's as clear-cut as it gets.

Inb4 "so what if I add a few spaces here and there?": the GPL also covers "a work based on" the code. This is where I (who am not a lawyer) can't speak confidently, but surely there are legal differences between "based on" and "reduced to practice"; both are very common occurrences in contracts, so there would be a lot of precedent.


I agree with you that verbatim copies are obviously covered by copyright. What if LLMs reproduce code with changed variable and function names (which would be a great improvement to `cs_gaxpy` in the original article)? What if just the general structure of an algorithm is used? What if the LLM translates the C algorithm from the original article into Rust? This discussion is only scratching the surface.


Copyright. Copyright. That is the issue. If you reproduce the code verbatim then you are in violation. This is what the AI is doing.

Just learning from the GPL code to make yourself smarter is not the problem.


It's going to be an uphill battle just to get people to even understand what the problems are. And this is even a technical forum. Now imagine trying to explain these nuances to a judge or jury.


It's not so much an ability to understand as it is a desire to not understand in order to be able to ignore the rightsholders' licensing terms.

Plenty of tech companies exist by putting a thin layer on top of the hard work of others and if those others can be ignored then that's what they'll do.


The example given in the article isn't verbatim.


I... don't see how you read what he said that way at all?


If you read a negative connotation into "slow the development of AI", that's what you get. It's how I'd interpret that comment, too.


It's not ok, but Microsoft couldn't care less (because they're not going to get fined).


yes, because they don't indemnify their customers

anyone sensible should stay the hell away from copilot until the fair use question is settled


Looks like they do.

https://github.com/customer-terms/github-copilot-product-spe...

4. Defense of Third Party Claims. If your Agreement provides for the defense of third party claims, that provision will apply to your use of GitHub Copilot. Notwithstanding any other language in your Agreement, any GitHub defense obligations related to your use of GitHub Copilot do not apply if (i) the claim is based on Code that differs from a Suggestion provided by GitHub Copilot, or (ii) you have not enabled all filtering features available in GitHub Copilot.


interesting

> If your Agreement provides for the defense of third party claims

do any of them?

it also states:

> You retain all responsibility for Your Code, including Suggestions you include in Your Code or reference to develop Your Code. It is entirely your decision whether to use Suggestions generated by GitHub Copilot. If you use Suggestions, GitHub strongly recommends that you have reasonable policies and practices in place designed to prevent the use of a Suggestion in a way that may violate the rights of others. This includes, but is not limited to, using all filtering features available in GitHub Copilot.

(contra proferentem would apply though)


I think it's pretty clear. If you're not filtering, you're liable. If you are and something transpires, they'll fight your legal battle for you which is probably better than any monetary indemnity clause. I assume this is for enterprise users where it actually matters.


> I think it's pretty clear.

without an Agreement in sight, it's not, the two terms conflict with each other as there's no clear precedence

(I think it's likely that there's an indemnity clause in any Agreement though!)


Looks like copilot is pretty upfront:

Matched content:

n ; Ap = A->p ; Ai = A->i ; Ax = A->x ; for (j = 0 ; j < n ; j++) { for (p = Ap [j] ; p < Ap [j+1] ; p++) { y [Ai [p]] += Ax [p] * x [j]

License Summary

This snippet matches 500 references to public code. Below, you can find links to a sample of 50 of these references.

NOASSERTION (405)

MIT (26)

GPL-3.0 (19)

BSD-3-Clause (16)

GPL-2.0 (11)

Apache-2.0 (7)

BSD-2-Clause (7)

LGPL-3.0 (6)

LGPL-2.1 (3)

File References

Match Location Repo License

ChRis6/circuit-simulation Unknown license

AndySomogyi/SuiteSparse Unknown license

ru-wang/slam-plus-plus Unknown license

Cruvadio/invariant_measures Unknown license

nishant-sachdeva/rrc-g2o Unknown license

alecone/ROS_project Unknown license

gustavopr/HANK Unknown license

lcnbeapp/beapp Unknown license

imod-mirror/IMOD Unknown license

clach/MPM Unknown license

MagicPixel-Dev/cxsparse Unknown license

elshafeh/own Unknown license

squirrel-project/squirrel_nav Unknown license

lcnhappe/happe Unknown license

cix1/OpenSees Unknown license

pachamaltese/dulmagemendelsohn Unknown license

gina10287/Interactive-shape-manipulation-FinalProject Unknown license

GuillaumeFuchs/Ensimag Unknown license

w2fish/CSparse Unknown license

cffjiang/cis563-2019-assignment Unknown license

diesendruck/gp Unknown license

hendersk101401/jlabgroovy Unknown license

robhemsley/SuiteSparse Unknown license

Glaphy/Emission Unknown license

Glaphy/Emission Unknown license

daves-devel/ECE1387 Unknown license

anranknight/TE Unknown license

weigouheiniu/TE Unknown license

Datoow/fm Unknown license

chaoyan1037/openMVG_modified Unknown license

cran/igraph Unknown license

GHilmarG/UaSource Unknown license

ZhaoqunZhong/Kalibr-ubuntu18-ros-melodic Unknown license

hechzh/g2o Unknown license

Open-Systems-Pharmacology/OSPSuite.CPP-Toolbox Unknown license

elshafeh/own Unknown license

yizhang/riotstore Unknown license

sgalazka/porr_mtsp Unknown license

skydave/sandbox Unknown license

alitekin2fx/orb_slam2_android Unknown license

Tianhonghai/vslam14_note MIT

LRMPUT/PlaneSLAM MIT

albansouche/Open-GeoNabla GPL-3.0

khawajamechatronics/mrpt-1.5.3 BSD-3-Clause

3000huyang/suitesparse-metis-for-windows BSD-3-Clause

igraph/igraph GPL-2.0

LRMPUT/DiamentowyGrant Apache-2.0

kurshakuz/graduation-project BSD-2-Clause

ghorn/debian-casadi LGPL-3.0

rmcgibbo/tungsten LGPL-2.1

Looks like the code in https://github.com/ChRis6/circuit-simulation/blob/2e45c7db01... is older than the GPL code in question, or the version provided in the example. Uh oh, did we discover something? Who actually owns this code? According to git blame it predates the code in question by a calendar year, is by a different author, and the oldest version has no license attached. Is it possible the code in the codeium.com example is relicensed and not GPL code at all?


The comment is arguing quite the opposite.


Training AI on code is clearly not the same as stealing it.

