I second that, having relatively recently used the native browser APIs for image processing. While it felt a bit awkward to use, it served its purpose pretty well.
If I needed more, I would probably not use Go anyways, but a sharper tool instead.
I knew what an absolute disaster of a video this was going to be before clicking. Highly recommend watching Colin's videos, this one included, for the sheer level of "this is clearly a bad idea, let's do it" that he gives off and the things learned along the way.
If we're thinking of the same picture: that kid is real and has a severe condition that needs medical attention. It's an extreme case, not the general one, but that doesn't make the problems any less urgent.
A while ago I saw a photo of Nancy Pelosi on the Congress floor with a walker. I later learned the picture was deepfaked. But in a way it was more true than a real photograph.
That's like saying the root of malnutrition is humans needing to eat. Perhaps technically true, but what's the point of pointing out that technicality? It does nothing to help address the problem.
We live in a globalized world of people competing for money. Hoping everyone begins singing kumbaya is nice, but it's far from a practical solution or diagnosis of the problem.
I'm not against coping strategies, they are useful. I'm against scamming and dishonesty. I also don't think ignoring the actual problems is the way to go, quite the contrary.
When a problem has no solution then it's not actually a "problem" but rather a fact to be accepted, like gravity. Regardless of new AI tools, there is no conceivable solution to the problem of having millions of scammers operating from jurisdictions without effective law enforcement. Time to start coping.
If "select *" breaks your code, then there's something wrong with your code. I think Rich Hickey talked about this. Providing more than is needed should never be a breaking change.
Certain languages, formats and tools do this correctly by default. For the others you need a source of truth that you generate from.
I don't see anything wrong with what the article is saying. If you have a view over a join of A and B, and the view uses "select *", then what is gonna happen when A adds a column with the same name as a column in B?
In SQLite, the view definition will be automatically expanded, and one of the columns in the output will automatically be distinguished with an alias. Which column gets renamed depends on the order of the tables in the join. This can absolutely break code.
In Postgres, the view columns are qualified at definition time, so nothing changes immediately. But when the view definition gets updated, you will get a failure in the DDL.
In any system, a large column can be added to one of the constituent tables and cause a performance problem. The best advice is to avoid these problems and never use "select *" in production code.
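The SQLite behavior described above is easy to reproduce in a quick in-memory session (a sketch for illustration; the table and column names are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE a (id INTEGER, name TEXT);
    CREATE TABLE b (a_id INTEGER, price REAL);
    CREATE VIEW v AS SELECT * FROM a JOIN b ON a.id = b.a_id;
""")

# The view initially exposes 4 columns: id, name, a_id, price.
print([d[0] for d in con.execute("SELECT * FROM v").description])

# Add a column to b whose name collides with a.name. SQLite stores the
# view's SQL text and re-expands the "select *" on use, so the view
# silently grows a 5th column, with the duplicate name disambiguated.
con.execute("ALTER TABLE b ADD COLUMN name TEXT")
print([d[0] for d in con.execute("SELECT * FROM v").description])
```

No DDL error is raised at any point, but any code that unpacked the view's rows positionally, or relied on the old column naming, is now broken.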
Seems like a database failure if it can't notify you that you introduced a breaking change. All of the schema information is available to the database, after all, so it should be able to tell you about the duplicate column breaking that view.
`select *` is bad for many reasons, but the biggest is that the "contract" your code has with the remote data store isn't immutable. The database can change, for many different reasons, independent of your code. If you want to write reliable code, you need to make as few assumptions as possible. One of those assumptions is what the remote schema is.
A column changing its data type is generally considered a breaking change for the schema (for obvious reasons), while adding more columns isn't. Backwards-compatible schema evolution isn't practical without the latter: you'd have to add a new secondary table whenever you want to add more columns.
This mirrors how adding additional fields to an object type in a programming language usually isn’t considered a breaking change, but changing the type of an existing field is.
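That asymmetry is easy to see in a small sketch (SQLite via Python's stdlib; the table and column names are invented): a consumer that names its columns survives a column addition unchanged, while a consumer of "select *" sees the row shape change under it.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE items (name TEXT, price REAL)")
con.execute("INSERT INTO items VALUES ('pen', 1.5)")

explicit = con.execute("SELECT name, price FROM items").fetchone()
star = con.execute("SELECT * FROM items").fetchone()

# A backwards-compatible schema change: add a column.
con.execute("ALTER TABLE items ADD COLUMN stock INTEGER")

# The explicit column list still returns the same row shape.
assert con.execute("SELECT name, price FROM items").fetchone() == explicit

# The star query's row tuple has grown; positional unpacking now breaks.
assert con.execute("SELECT * FROM items").fetchone() != star
```

The explicit column list is what makes "add a column" a non-breaking change for this consumer, just like adding a field to an object type.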
If you have select * in your code, there already is something wrong with your code, whether it breaks or not: the performance and possibly output of your code is now dependent on the table definition. I'm pretty sure Rich Hickey has also talked about the importance of avoiding non-local dependencies and effects in your code.
The performance, and partly the output, of the code is always dependent on the table definition. Using * instead of column names just removes an output limiter, which can be useful or irrelevant, depending on the context.
Though sure, it's known to negatively affect performance, in some database systems more than in others, I think?
Now there is a difference between what the article is talking about and what you are talking about, and I think that's quite important, because we tend to mix these things up often.
The article describes domain modeling; what you describe is computational modeling. The former lives at a higher abstraction, closer to the user. The latter is about data processing.
A lot of people have mentioned DDD (or similar) in this thread, but I think that is an example of mixing up computational modeling and domain modeling. I think this is what object orientation and its descendants like microservices generally have been doing wrong: applying domain structure at a level where it makes no sense anymore. This mismatch can add a lot of friction, repetition and overhead.
https://linux.die.net/man/3/setsockopt
Zig has a posix API layer.
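For comparison, a minimal sketch of the same call through a higher-level wrapper: Python's `socket` module exposes the same POSIX `setsockopt`/`getsockopt` interface that the man page above documents for C.

```python
import socket

# Create a TCP socket and enable a common socket-level option.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)

# Read the option back; a nonzero value means it is enabled.
print(s.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR))
s.close()
```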