
Comparing with Sweden, 1 and 2 could happen here as well. But since the government subsidizes costs over 1200 SEK ($120), the pharmacy will always suggest the cheapest alternative (it is required to by law). If a doctor prescribes something for $2 but an alternative exists which is only $1, then the pharmacy will suggest the one for $1.


What is the hype train you associate with typescriptlang.org? "Lang"?


I associate it with "golang", the search-optimized way to refer to the language Go, from the world's largest search company.


Before golang there was rubylang, clang, racketlang, dlang, slang, etc. Seems like a pretty common convention for language websites, not limited to a single organization. Be used “Kit”, but I’m not familiar with anyone else who did (and NeXT used it long before they did).


Sure. But would you refer to that as the "hype train"?


Because the term is not allocated specifically to Apple.


Sure, but it has a 30 year lineage in their active product line. It would be like prefixing a library with Direct in the early 00s. It seems ignorant and derivative.


Is it not possible to do serious UI work without fonts adapted to 3D?


I would say that the majority of people don't notice that StereoKit uses regular raster fonts for text rendering! However, that number is definitely not zero, and there are definitely people who can spot it right away, as evidenced here. Designers in particular would probably spot it immediately.

I'd say it's primarily a question of polish and feel! Those that can't spot it right away would still prefer SDF fonts in a side-by-side, and the difference becomes much more clear if you start getting your head really close to the text. I consider it to be an essential feature for a 1.0 version, but not a blocker for right now.


I actually worry more about looking at longer and smaller text in 3D. It is noticeably harder to read, which prevents many use cases. For instance, you would not read HN in VR using an RSS reader built with StereoKit.


Ah yeah, I think there's a collection of issues related to small text within XR! The biggest one is really pixel density on XR headsets, which is always an issue for small or far-away text. XR also can't benefit from sub-pixel approaches to font rendering. And since users can always move further from text, there's the IRL issue that the text is simply too far away to make out!

The current raster technique should actually be fine for smaller text; there's some super-sampling going on in the shader that should help a bit here too! I'm not entirely certain what else could be improved in this case besides font selection, so I'd be open to suggestions or tips :)


I am not an expert in the area, just somebody who read up on it after realizing that desktop-in-VR does not work out for me (and low res is usually not the problem, so it comes down to font aliasing).

Something like https://github.com/servo/pathfinder (newer) and, as you mentioned, SDF (older) are the main current approaches.


Thanks for the link! The path-based approach has been of interest to me, but I wasn't aware of an implementation other than Slug, which I have been avoiding. Guess I can finally do some real research on that topic now! :D

SDF is probably still the most realistic path for now, but I guess that may depend a bit on what comes up :)


What language are you programming in? Just thinking about how this would work if you are using say React.


Lots of different ones; Perl, Python, C, and C++ are the main ones. I think there is some JavaScript, but I don't work on those parts. Looks like node-react is available in Debian now.

https://packages.debian.org/sid/node-react


Alright, cool. It's a bit confusing to me to create a dependency from your programming language's dependency system to the operating system's. But whatever works for you.

Wouldn't it be tedious to repackage libraries in Debian format? Why not use CPAN or similar directly? Or a local artifact server that already supports the existing package formats.


Dependencies across different language ecosystems exist (e.g. Python stuff often depends on JavaScript stuff for documentation, or on C libraries for faster machine code), so it is convenient to encode them all in one package manager format. Repackaging ecosystem libraries is pretty much automated these days, but it still exposes you to the internals of each library, since you need to do QA on everything to get it up to Debian standards first. Once things are in Debian you get an entire community of QA too: checking for new build failures due to changes in other packages, notifying you of new security issues, etc.

https://wiki.debian.org/AutomaticPackagingTools


Good thing that you could announce that pride on this other social media then. ;)


> It's not very fair to say "upgrades on Postgres are hard" if that's practically always true in your use-case, that's all.

That is just nonsense. If it's hard then it's hard. It's not easier because it's practically always hard.

What's next, it's not fair to say that flying to the moon is hard because flying to LEO is tricky? Oh my.


I guess the point they're trying to make is that phrasing it as "Postgres upgrades are hard" is not fair, because it gives the impression it's a weakness in Postgres, when the use case would be hard in any database.

It would be more fair to say "Zero-downtime database upgrades are hard".


I wouldn’t say that MySQL is a better database, but upgrades with it are very easy: you back up, then you upgrade the packages, and the new packages run the upgrade command.


Yeah, exactly as on PG.

What about replicated and zero-down time (the current thread context)? That's also hard with MySQL.


It's actually not too difficult in MySQL, in terms of the mechanical steps required. MySQL's built-in replication has always been logical replication, and they've made major ease-of-use improvements in recent years, for example:

* efficient binary data clone is a single command

* starting replication with GTID positioning is a single command

* data dictionary / metadata upgrades happen automatically when you start a newer-version mysqld
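
To make those concrete, here is a rough sketch of the clone + GTID steps (MySQL 8.0.23+ syntax; the host, user, and password names here are made up, and the clone plugin has to be installed on both donor and recipient):

  -- On the new-version replica: pull a physical copy of the donor's data
  SET GLOBAL clone_valid_donor_list = 'old-primary:3306';
  CLONE INSTANCE FROM 'clone_user'@'old-primary':3306 IDENTIFIED BY 'secret';

  -- After the clone completes (the recipient restarts automatically),
  -- start replication with GTID auto-positioning:
  CHANGE REPLICATION SOURCE TO
    SOURCE_HOST = 'old-primary',
    SOURCE_USER = 'repl_user',
    SOURCE_PASSWORD = 'secret',
    SOURCE_AUTO_POSITION = 1;
  START REPLICA;

(Before 8.0.23 the last two statements were spelled CHANGE MASTER TO and START SLAVE.)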

The hard part of major-version upgrades in MySQL is testing your application and workload, to ensure no deprecations/removals affect you, and checking for queries with performance regressions. Tools like Percona's pt-upgrade and ProxySQL's mirroring feature help a lot with this.


I haven't used MySQL in a couple of years, but its replication (all methods) used to have a whole page of gotchas and idiosyncrasies with various corner cases.

They also introduced binary file format changes even in minor and patch-level releases, and downgrading stopped being supported. AFAIK in that case you had to restore from backup.

It's just the exact opposite of Postgres' upgrade guarantees.


It's hard to respond without any specifics, but in my personal view it's a pretty solid replication implementation in modern versions. Sure, there are some gotchas, but they're not common.

Realistically, every relational database has corner cases in replication. There are a lot of implementation trade-offs in logical vs physical, async vs sync, single storage engine vs pluggable, etc. Replication is inherently complex. If the corner cases are well-documented, that's a good thing.

I do totally agree the lack of downgrade support in MySQL 8 is problematic.

Postgres is a really great database, don't get me wrong. But no software is perfect, ever. Consider the silent concurrent index corruption bug in pg 14.0-14.3 for example. If something like that ever happened in MySQL, I believe the comments about it here would be much more judgemental!


I thought MySQL could do replication across a major version? If so, that's a big differentiator.



Meh. That argument is just a waste of time. This is a post about Postgres. Thinking about whether or not the title is "fair" towards the software itself is ridiculous and a waste of everyone's time. The title is correct and the software won't feel bad about it. Move on.


Would you consider that a piece of cake? I have doubts. The page you linked to describes how, if you follow the instructions on many other blogs, you'll have data loss. Not a symptom of a piece-of-cake process.


If you need zero downtime, you are already in a field where NOTHING is a piece of cake. Not your network, not your computing, not your storage, and I haven't even talked about the human aspect of "zero downtime".


Alright. But the GP comment claimed upgrading PG was a piece of cake. You claim it is not a piece of cake. So it sounds like you agree that the claim that upgrading PG was a piece of cake was misleading.


> the GP comment claimed upgrading PG was a piece of cake. You claim it is not a piece of cake. So it sounds like you agree that the claim that upgrading PG was a piece of cake was misleading.

GP claimed upgrading was a piece of cake, not that zero downtime upgrades are a piece of cake. The two claims aren’t interchangeable. The simple upgrade path is always available, though it may have downtime consequences you personally are unwilling to accept. And the complex upgrade path is complex for reasons that have nothing to do with PostgreSQL - it’s just as complex to do a zero downtime upgrade in any data store, because in all cases it requires logical replication.

So if anything it feels like you’re the one being misleading by acting as though GP made a more specific claim than they actually did, and insisting that the hard case is hard because of PG instead of difficulty that’s inherent to the zero downtime requirement.
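
For anyone curious, a minimal sketch of what that logical-replication path looks like on the Postgres side (the publication/subscription names and connection details here are made up; the publisher needs wal_level = logical, and the schema must be loaded on the new-version server first, e.g. via pg_dump --schema-only):

  -- On the old-version primary:
  CREATE PUBLICATION upgrade_pub FOR ALL TABLES;

  -- On the new-version server, after loading the schema:
  CREATE SUBSCRIPTION upgrade_sub
    CONNECTION 'host=old-primary dbname=app user=repl password=secret'
    PUBLICATION upgrade_pub;

The SQL itself is the easy part; the cutover choreography (syncing sequences, pausing writes, verifying, having a rollback plan) is where the complexity lives, in any database.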


So if I tell you that upgrading pretty much all databases is a piece of cake, but leave out the criterion "unless you want to keep your data", would you say that is a fair statement?

If you claim that process X is trivial, one has to make some assumptions, right? Otherwise I could claim that going to the moon is trivial but leave out "assuming you have a rocket, resources, people and anything else you may require".

Claiming that something is a piece of cake as a broad statement without any details is meaningless at best.


> So if I tell you that upgrading pretty much all databases is a piece of cake but not include the criteria "unless you want to keep your data" you would say that is a fair statement?

Incredibly bad-faith comparison, this.

Many, many datastore deployments can tolerate 10 minutes of downtime every 4 or 5 years when their PG install finally transitions out of support. Data loss isn’t even in the same universe of problem. It’s reasonable to talk about how easy it is to upgrade if you can tolerate a tiny bit of downtime every few years, since most people can. It’s utterly asinine to compare that to data deletion.


So PostgreSQL upgrades are a piece of cake if your users don't care whether your system is available to them. Is this some parody of the older claim that MongoDB is a great system if your users don't care whether their data is lost?


Do you actually have such an uptime requirement or do you think you have such an uptime requirement? How are you conducting it with other databases?


Yes, if the software I work on doesn't work for 5 minutes, we get a ton of tickets from customers. We have tens of thousands of customers who pay for it. Not being able to shut down your system for an hour isn't exactly a unique requirement. Technically our SLA is 10 minutes to perform jobs, but our customers don't wait that long before creating tickets.

We pay Microsoft to perform upgrades transparently for us. They have multiple servers and they shuffle transaction logs and traffic between them to ensure availability during outages and upgrades. There are 6 copies of each database in different regions. Not sure how that is relevant, though?


It is relevant; reread and understand the parent comment. Get a managed PG and you'll have the same thing.


That's the point.

It's not a weakness in Postgres.

Managing upgrades in a highly available, close to 100% uptime database is hard.

If you want a piece of cake, outsource this service and be happy enjoying the database features working as you please.


Yes, that's what I was saying :)


I believe you are the one who needs to reread the thread. The person I replied to claimed that "Postgresql upgrades are a piece of cake."

But it is not. It is complex to do properly, just like with most if not all other databases. Claiming that it is easy is just ignorant, and spreading such misinformation is bad.


I never claimed it's easy. I just said get a managed PG, offload it to someone else, and then it's easy, yeah. I'm wondering the same as the other commenter above me: what's the difference with other unmanaged DBs?


Even four 9's give 50 minutes a year to do maintenance work.
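
(For the record, the arithmetic: 99.99% uptime allows 0.0001 × 365 × 24 × 60 ≈ 52.6 minutes of downtime per year.)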

I am a fan of planning for maintenance. Planned downtime is definitely accepted by most of the population. I mean, what are you gonna do?

Much better a planned downtime than an outage. Leaving a system to rot just because the architecture is "don't touch it while it's working" is a sure recipe for a disaster of some kind.


> Even four 9's give 50 minutes a year to do maintenance work.

Just the security patching of the host OS will likely consume a bunch of those minutes.

Not sure what point you were trying to make apart from that. I am not advocating that people should leave systems to rot.


Haha. I would guess that >95% of .NET developers have no idea what you are even talking about.

This really seems like the recurring bubble where posters on HN somehow think that a large majority of the industry is hitting F5 on HN, when they are instead busy doing actual work.


You're probably right. I'm not a developer, just a tech nerd, so it's a mistake to assume others would care about minutiae.

Regardless, I think it is still fair to say that if any other language platform were withholding features like a debugger, nobody would consider that platform particularly open.


I missed this "debacle" as well, the HN story on it in October only had 59 comments, of which about half a dozen were actually about the topic itself. Not much outrage here apparently.

> ... if any other language platform was withholding features like a debugger, ...

Except "withholding a debugger" is not the same as "a complex new feature that builds on top of the existing debugger will only be released on one platform to begin with" is it? Features in beta releases aren't guaranteed to in the final release, even if some users really liked it.

Visual Studio already had a lot of code relating to editing program state when the debugger is paused after hitting a breakpoint. Seems plausible that Hot Reload in VS would be able to leverage some of this code to reduce the amount of work required to build, test and deliver the feature on that platform. It would also mean getting feedback and issues more relevant to the core of the feature, because the integration and UX is an evolution of code that's been there pretty much since VS was first released.

