
I spend most of my time writing and researching live-action anime, and I can say that for it to really blow up, there needs to be a paradigm shift.

Currently, many of the efforts try to fit a square peg into a round hole by either "westernizing" the content, as done in Hollywood, or completely jumping the shark, as with most Japanese releases. Both only make for a less palatable result.

Instead, a selective "offset" approach should be taken. I illustrate this briefly in a YouTube mini-essay:

https://m.youtube.com/watch?v=WiyqBHNNSlo

It is also how I write my IP:

https://www.weedonandscott.com/narrative/keepers-of-alteria

And it's how I assess existing IPs for adaptation.

Until current stakeholders shake the existing models out of their system, this field is leaving a lot of creative output (and profits) on the table.


> I spend most of my time writing and researching live-action anime

What do you mean by live-action anime? This reads to me as a contradiction in terms. Anime is a term of art for CJK animation, especially Japanese animation.


Live-action anime is an oxymoron commonly used to describe live-action film or TV series based on anime or manga, or having similar style to them.

The term is not well-defined.

You could say it applies to Ghost in the Shell, because it is based on existing IP of the genre.

You could stretch it to include The Matrix, which is an original IP but employs a lot of the style.

Edge of Tomorrow, even though it is based on a Japanese light novel, has deviated enough in both plot and style for the term not to apply.


That’s interesting. Is this phrasing used by others or is it your own? I haven’t ever heard it before. As an anime/manga fan, it seems confusing and slightly grating. It feels similar to intentionally wrong or “not even wrong” engagement bait.

The closest thing I can think of that I myself could describe that way would be Alita: Battle Angel, due to the big eyes/small mouth style of Alita, but I would never actually call it live-action anime; rather, a live-action anime adaptation.

Maybe this is related to modern speech patterns? The closest thing I can think of is “it’s giving x” which seems to me to be a shortened version of related phrase “it’s giving x vibes.”

On a related note, The Matrix actually has an animated spinoff The Animatrix, which has segments that are arguably anime, as they are produced by Japanese creators in a typical style.

However, I find that the phrase live-action anime perhaps says more about the person saying it and their perspective than it says about the content being described as such.


I could see why this rubs you the wrong way.

I have never thought of its origins before, but I'd assume it's shorthand for "live-action anime adaptation". Maybe this makes it more palatable for you?

As for Animatrix, yes -- it really went full circle, considering how they pitched the original feature:

> When Larry and Andy Wachowski were pitching The Matrix to their producers, they played them a DVD of an 82-minute Japanese cartoon and said: "We wanna do that for real."

https://www.theguardian.com/film/2009/oct/19/hollywood-ghost...


I think it’s mostly me. In another comment you mentioned the live-action Netflix One Piece adaptation as live-action anime, and I think that description actually felt somewhat apropos in that specific case, due to the adaptation employing lots of CGI and special effects to ape the style and mannerisms of the anime.

I think finding the term live-action anime slightly off is my own issue, as I find the term Japanimation also slightly grating. It probably has something to do with being a fan of a historically somewhat mocked/maligned subculture.


Live-action adaptations of anime and manga aren't even particularly new. There were plenty in Japan in the '90s (including pornographic ones). The earliest non-Japanese adaptation I'm familiar with was 2001's Meteor Garden in Taiwan, but I'd be surprised if it were the first.

Is this your day job? That's impressive. Did you cry at the travesty of Netflix Cowboy Bebop?

No, not quite my day job yet.

I stopped watching the live-action Cowboy Bebop after about 5 minutes, so I was spared much of it.

The same production company (Tomorrow Studios) is currently making the live-action One Piece, which, ironically, could be more detrimental to live-action anime than Cowboy Bebop was.

The reason is that even though the live-action One Piece is not something I'd consider live-action _anime_, it's actually quite good and is definitely well received, which may cause producers to adopt its formula, and stop researching alternative adaptation paths.


> travesty of Netflix Cowboy Bebop

Ugh. I just… it was so… ugh.


DHH mentioned they built it to move from the cloud to bare metal. He glorifies the simplicity but I can't help thinking they are a special use case of predictable, non-huge load.

Uber, for example, moved to the cloud. I feel like in the span between them there are far more companies for which Kamal is not enough.

I hope I'm wrong, though. It'll be nice for many companies to have the choice of exiting the cloud.


I don't think that's the real point. The real point is that 'big 3' cloud providers are so overpriced that you could run hugely over provisioned infra 24/7 for your load (to cope with any spikes) and still save a fortune.

The other thing is that cloud hardware is generally very very slow and many engineers don't seem to appreciate how bad it is. Slow single thread performance because of using the most parallel CPUs possible (which are the cheapest per W for the hyperscalers), very poor IO speeds, etc.

So often a lot of this devops/infra work is solved by just using much faster hardware. If you have a fairly IO-heavy workload, then switching from slow storage to PCIe 4.0 NVMe drives doing 7 GB/s is going to solve so many problems. If your app can't do much work in parallel, then CPUs with much faster single-threaded performance can bring huge gains.


> The other thing is that cloud hardware is generally very very slow and many engineers don't seem to appreciate how bad it is.

This. Mostly disk latency, for me. People who have only ever known DBaaS have no idea how absurdly fast they can be when you don’t have compute and disk split by network hops, and your disks are NVMe.

Of course, it doesn’t matter, because the 10x latency hit is overshadowed by the miasma of everything else in a modern stack. My favorite is introducing a caching layer because you can’t write performant SQL, and your DB would struggle to deliver it anyway.


> Of course, it doesn’t matter, because the 10x latency hit is overshadowed by the miasma of everything else in a modern stack.

This. Those complaining about performance seem to be people who are not aware of latency numbers.

Sure, the latency from reading data from a local drive can be lower than 1ms, whereas in block storage services like AWS EBS it can take more than 10ms. An order of magnitude slower. Gosh, that's a lot.

But whatever your disk access needs are, your response will be sent over the wire to clients. That takes between 100 and 250ms.

Will your users even notice a difference if your response times are 110ms instead of 100ms? Come on.


While network latency may overshadow that of a single query, many apps have many such queries to accomplish one action, and it can start to add up.

I was referring more to how it's extremely rare to have a stack as simple as request --> LB --> app --> DB. Instead, the app is almost always a microservice, even when that wasn't warranted, and each service is still making calls to DBs. Many of the services depend on other services, so there's no parallelization there. Then there's the caching layer stuck between service --> DB, because by and large the RDBMS isn't understood or managed well, so the fix is to just throw Redis between them.


> While network latency may overshadow that of a single query, many apps have many such queries to accomplish one action, and it can start to add up.

I don't think this is a good argument. Even though disk latencies can add up, unless you're doing IO-heavy operations that should really be async calls, they are always a few orders of magnitude smaller than the whole response times.

The hypothetical gains you get from getting rid of 100% of your IO latencies top out at a couple of dozen milliseconds. In platform-as-a-service offerings such as AWS' DynamoDB or Azure's CosmosDB, which involve a few network calls, an index query normally takes between 10 and 20ms. You barely get above single-digit performance gains if you lower disk latencies down to zero.

In relative terms, if you are operating an app where single-millisecond deltas in latencies are relevant, you get far greater decreases in response times by doing regional and edge deployments than switching to bare metal. Forget about doing regional deployments by running your hardware in-house.

There are many reasons why talk about performance needs to start with getting performance numbers and figuring out bottlenecks.


Did you miss where I said “…each service is still making calls to DBs. Many of the services depend on other services…?”

I’ve seen API calls that result in hundreds of DB calls. While yes, of course refactoring should be done to drop that, the fact remains that if even a small number of those calls have to read from disk, the latency starts adding up.

It’s also not uncommon to have horrendously suboptimal schema, with UUIDv4 as PK, JSON blobs, etc. Querying those often results in lots of disk reads simply due to RDBMS design. The only way those result in anything resembling acceptable UX is with local NVMe drives for the DB, because EBS just isn’t going to cut it.
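
To make the compounding concrete, here's a rough back-of-the-envelope sketch (the per-query overhead, read latencies, and call count are illustrative assumptions, not measurements from any particular setup):

    # Rough model: one API call fans out into N sequential DB queries,
    # each of which misses cache and touches storage once.
    def api_latency_ms(n_queries, per_query_overhead_ms, storage_read_ms):
        return n_queries * (per_query_overhead_ms + storage_read_ms)

    # Illustrative numbers: ~0.1 ms for a local NVMe read vs ~1.5 ms
    # for network-attached block storage, plus fixed per-query overhead.
    print(api_latency_ms(200, 0.3, 0.1))  # local NVMe:          ~80 ms
    print(api_latency_ms(200, 0.3, 1.5))  # network block store: ~360 ms

Same application code, same schema, but once the calls are sequential the storage tier alone turns a sub-100ms response into several hundred milliseconds.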


It's still a problem if you need to do multiple sequential IO requests that depend on each other (example: read index to find a record, then read the actual record) and thus can't be parallelized. These batches of IO sometimes must themselves be sequential and can't be parallelized either, and suddenly this is bottlenecking the total throughput of your system.

I'm using a managed Postgres instance from a well-known provider and holy shit, I couldn't believe how slow it is. For small datasets I couldn't notice, but when one of the tables reached 100K rows, queries started to take 5-10 seconds (the same query takes 0.5-0.6 seconds on my standard i5 Dell laptop).

I wasn't expecting blazing speed on the lowest tier, but 10x slower is bonkers.


Laptop SSDs are _shockingly_ fast, and getting equivalent speed from something in a datacenter (where you'll want at least two disks) is pretty expensive. It's so annoying.

To clarify, are you talking about when you buy your own servers, or when you rent from an IaaS provider?

It's sad that what should have been a huge efficiency win, amortizing hardware costs across many customers, ended up often being more expensive than just buying big servers and letting them idle most of the time. Not to say the efficiency isn't there, but the cloud providers are pocketing the savings.

If you want a compute co-op, build a co-op (think VCs building their own GPU compute clusters for portfolio companies). Public cloud was always about using marketing and the illusion of need for dev velocity (which is real for hypergrowth startups and such, just not nearly as prevalent as the zeitgeist would have you believe) to justify the eye-watering profit margin.

Most businesses have fairly predictable interactive workload patterns, and their batch jobs are not high priority and can be managed as such (with the usual scheduling and bin packing orchestration). Wikipedia is one of the top 10 visited sites on the internet, and they run in their own datacenter, for example. The FedNow instant payment system the Federal Reserve recently went live with still runs on a mainframe. Bank of America was saving $2B a year running their own internal cloud (although I have heard they are making an attempt to try to move to a public cloud).

My hot take is public cloud was an artifact of ZIRP and cheap money, where speed and scale were paramount, cost being an afterthought (Russ Hanneman pre-revenue bit here, "get big fast and sell"; great fit for cloud). With that macro over, and profitability over growth being the go-forward MO, the equation might change. Too early to tell imho. Public cloud margins are compute customers' opportunities.


Wikipedia is often brought up in these discussions, but it's a really bad example.

To the vast majority of Wikipedia users, who are not logged in, all it needs to do is show (potentially pre-rendered) article pages with no dynamic, per-user content. Those pages are easy to cache or even offload to a CDN. For all the users care, it could be a giant key-value store, mapping article slugs to HTML pages.

This simplicity allows them to keep costs down, and the low costs mean that they don't have to be a business and care about time-on-page, personalized article recommendations or advertising.

Other kinds of apps (like social media or messaging) have very different usage patterns and can't use this kind of structure.


> Other kinds of apps (like social media or messaging) have very different usage patterns and can't use this kind of structure.

Reddit can’t turn a profit, Signal is in financial peril. Meta runs their own data centers. WhatsApp could handle ~3M open TCP connections per server, running the operation with under 300 servers [1] and serving ~200M users. StackOverflow was running their Q&A platform off of 9 on prem servers as of 2022 [2]. Can you make a profitable business out of the expensive complex machine? That is rare, based on the evidence. If you’re not a business, you’re better off on Hetzner (or some other dedicated server provider) boxes with backups. If you’re down you’re down, you’ll be back up shortly. Downtime is cheaper than five 9s or whatever.

I’m not saying “cloud bad,” I’m saying cloud where it makes sense. And those use cases are the exception, not the rule. If you're not scaling to an event where you can dump these cloud costs on someone else (acquisition event), or pay for them yourself (either donations, profitability, or wealthy benefactor), then it's pointless. It's techno performance art or fancy make work, depending on your perspective.

[1] https://news.ycombinator.com/item?id=33710911

[2] https://www.datacenterdynamics.com/en/news/stack-overflow-st...


You can always buy some servers to handle your base load, and then get extra cloud instances when needed.

If you're running an ecommerce store for example, you could buy some extra capacity from AWS for Christmas and Black Friday, and rely on your own servers exclusively for the rest of the year.
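
As a very rough cost sketch of that split (every price and number below is a made-up placeholder, not a quote from any provider):

    # Hybrid capacity: owned servers cover the year-round baseline,
    # short-lived cloud instances absorb the seasonal peak.
    baseline_servers = 4
    owned_monthly = 250          # amortized hardware + colo, per server per month
    owned_cost = baseline_servers * owned_monthly * 12

    peak_extra_servers = 8       # rented only for ~6 weeks around the holidays
    cloud_hourly = 0.40          # illustrative on-demand price per instance-hour
    burst_cost = peak_extra_servers * cloud_hourly * 24 * 7 * 6

    print(owned_cost, burst_cost)  # 12000  3225.6 per year (egress not modeled)

Whether that beats going all-in on either side depends heavily on egress pricing and how spiky the peak really is, which is worth measuring before committing.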


But the ridiculous egress costs of the big clouds really reduce the feasibility of this. If you have some 'bare metal' boxes in the same city as your cloud instances you are going to be absolutely clobbered with the cost of database traffic from your additional AWS/azure/whatever boxes.

Is database traffic really all that significant in this scenario? I'd expect the bulk of the cost to be the end-user traffic (serving web pages to clients) with database/other traffic to your existing infra a relatively minor line-item?

> I don't think that's the real point. The real point is that 'big 3' cloud providers are so overpriced that you could run hugely over provisioned infra 24/7 for your load (to cope with any spikes) and still save a fortune.

You don't need to roll out your own reverse proxy project to run services in-house.

Any container orchestration service was designed for that scenario. It's why they exist.

Under the hood, applications include a reverse proxy to handle deployment scenarios, like blue/green, onebox, canary, etc.

You definitely do not need to roll your own project to do that.


> I feel like in the span between them there are far more companies for which Kamal is not enough.

I feel like this is a bias in the HN bubble: In the real world, 99% of companies with any sort of web servers (cloud or otherwise) are running very boring, constant, non-Uber workloads.


Not just HN but the whole internet overall, because all the news, articles, and tech achievements are pumped out by Uber and other big tech companies.

I am pretty sure Uber belongs to the top 1% of internet companies in terms of scale. 37Signals isn't exactly small either. They spent $3M a year on infrastructure in 2019. Likely a lot higher now.

The whole tech cycle needs to stop having a top-down approach where everyone does what Big Tech is doing. Instead we should try to push the simplest tool from the low end all the way to the 95% mark.


They spend considerably less on infra now - this was the entire point of moving off cloud. DHH has written and spoken lots about it, providing real numbers. They bought their own servers and the savings paid for it all in like 6 months. Now it's just money in the bank until they replace the hardware in 5 years.

Cloud is a scam for the vast majority of companies.


I feel Uber is the outlier here. For every unicorn company there are 1000s of companies that don't need to scale to millions of users.

And due to the insane markup of many cloud services, it can make sense to just use beefier servers 24/7 to deal with the peaks. From my experience, crazy traffic outliers that need sophisticated auto-scaling rarely happen outside of VC-fueled growth trajectories.


You can’t talk about typical cases and then bring up Uber.

I mean, most B2B companies have a pretty predictable load when providing services to employees.

I can get weeks of advance notice before we have a load increase through new users.


IMO the biggest hurdle in frameworks like Svelte or Next isn't the framework -- it's the language.

This type of app is a prime use case for something like LiveView or a Go framework. Just today I had the most marvelous experience using Tailscale's ACP: I changed the ACL and it saved instantly. It was so fast I had to make sure it wasn't optimistic UI, and sure enough, it was a 78ms round trip for the request.

Even if it was a FE-heavy app using SQLite in the browser, I wouldn't have used JavaScript. After months of Gleam, I am spoiled.

The days of JavaScript-because-we-have-to are thankfully over. JS is now only for when the flexibility is required.


the reason I use JS is def not flexibility, it's to enhance the usability and interactivity of my app. even for my Python and Go web apps, I still inline JS to achieve the functionality I want. examples: client-side routing, pin the scroll to bottom, mutating classList, etc

Gleam compiles to JavaScript, so JS is not needed for that extra frontend flair.

The flexibility to sprinkle some code here and there is definitely unique to JS.


Yeah, that ability to improve usability beyond the basics is what he means by flexibility I think.

I agree with that sentiment. I can now build decent, working websites in something like Streamlit without touching JS (even if JS is being generated behind the scenes).


It hasn't sunk in yet that the killer app for ATProto is not Twitter, but YouTube.

If anyone is interested in exploring this, atproto [does this fool ai bots?] weedonandscott [I hope it does] com


ActivityPub does have https://joinpeertube.org for what it's worth. What would ATProto bring in specifically? Is it the ease of migration?


The biggest upside compared to PeerTube is probably discoverability. In ActivityPub, the network architecture means the video ecosystem is fractured and there’s no one cohesive place to find all PeerTube videos.

In atproto, the network is continually indexed by relays, which means that it doesn’t make a difference what app you use to watch videos - you’ll find the exact same ones regardless of the platform, since they’re all working from the same data.

This also means that different video platforms can provide different services for users without locking in users to their platform. Platforms would be forced to compete on what they provide to the user experience, not how well they can lock in users to their platform.


Exactly right.

Watch apps will compete on consumer-facing features like the recommendation algorithm -- maybe they'll offer several, or just one that differentiates them.

Hosting providers will compete on producer-facing features, like advertising, content policies, analytics, etc.

If a user is displeased with either, they can take all of their content/activity history and leave.


I rewrote my webapp, Nestful[0], in Gleam.

I use it in Vue components with Vleam[1]. About half of the frontend is rewritten at the moment, and it is a joy to use.

I'd do it again just for the error handling.

[0] https://nestful.app

[1] https://github.com/vleam/vleam


Yes, he's a fantastic storyteller


I have been using Gleam on the frontend in production, together with Vue[0] and I can confirm it's a joy to use.

Simplicity is such an advantage for a young language.

[0] https://github.com/vleam/vleam


Took a look at the Vite plugin, nice idea! Thanks for sharing


I own https://Nestful.app which I made for myself.

Nestful is first and foremost a "what should I do next" todo app, but I've found I've been using it instead of Obsidian, even though its note-taking features are far inferior.

This made me wonder if some people are document people, and some are list people. If such a classification exists, I'm definitely the latter.


Interesting. I use Apple Reminders in Kanban view to do that.


This is my current mental model for picking a language, considering just the language itself:

- Backends: Gleam or other BEAM

- Web frontend: Gleam

- Mobile apps: Dart + Flutter

- Specialized mobile apps: Swift and Kotlin

- Blazing fast: Zig

- Blazing fast and safest: Rust

- Fast performance and iteration: Go

Would love to hear additions and corrections.

My only problem with this is I'm not a fan of Go's syntax, and I wonder what's a good alternative. I've heard good things about OCaml but haven't checked it out yet.


OCaml could almost replace all of those. I don’t think there is a BEAM compiler backend yet.

I’m not very experienced with BEAM, but could its features be delivered with a framework on top of a different stack? I know Akka is popular.


The OCaml backend for the BEAM was called Caramel, but it was abandoned after its author went on to work on his own build system, warp.


Build systems come for our best and brightest. :-(

Talk to your friends and family.


The actor model is a small component of the BEAM. Even then, it guarantees yields to the scheduler.

This is practically impossible to retrofit on to an existing language, esp. in the presence of loops.


GC precludes OCaml from replacing a chunk of those.


>>> don’t think there is a BEAM compiler backend yet

I think there's something close - https://caramel.run/manual/


The project has been abandoned: https://github.com/leostera/caramel/discussions/102


I’m super into elixir now and don’t see myself going anywhere else. Is gleam really that good? What are the advantages? Can I use liveview with it?


As far as I know Gleam is the only statically typed language on the BEAM, which is important to me since I don't like dynamically typed languages. But of course Elixir is getting optional types now.


There is also:

Purerl - Erlang backend for PureScript, a few folks are using this in production - https://github.com/purerl/purerl

Caramel - Ocaml for Beam, seems dead - https://github.com/leostera/caramel

and more probably dead projects at https://github.com/llaisdy/beam_languages


Is that the only difference?


It can also compile to JavaScript (instead of the BEAM) so you can use the same language for backend and frontend work.


- Backends: Kotlin (on JVM)

- Web frontend: TypeScript (maybe Gleam in the future!?)

- Fast performance and iteration if I want a binary: Kotlin (native compiled)

- Blazingly fast and good for WASM: Rust

- languages that I keep an eye on: Gleam, Zig, Odin

- languages that I will never touch: C, C++

- languages that I think are quaint: OCaml, Lisp, Haskell

- languages that I have used in the past and that are fine: Dart

- languages that I have used in the past and that are ok: Java (if it had nullability, it'd be fine)


I love Kotlin, but don't want to use IntelliJ, and they obviously have strong financial incentives against supporting other IDEs. Has anything changed in this regard?

I appreciate their work on native/wasm, and I think it's great if they could be financially rewarded/sponsored for that work. It's just unfortunate that it has to be in the shape of an IDE dependency.


Nim fits a lot of Go's use cases. It is way more niche though.


A lot of Tailwind's value comes not from the utility classes, but from the design system they enforce on the user, which is the most valued feature Tailwind offers, in my opinion.


I’ve always had the idea of producing a CSS design system with just a bunch of preset CSS variables to use, like ‘margin-left: var(--p2);’, as it’s close to how I end up using Tailwind lately: ‘@apply ml-2;’


Why use @apply for this? The classes are already simple and standardized. Now if I want to modify a component style I have to go find the css file with the @apply rule, which is how css normally used to work & kind of defeats the point of Tailwind IMO. I also have to grep the whole codebase and make sure nobody was using `.card-bg` or whatever in an unexpected spot. Plus you have to deal with a mixture of bespoke CSS classes and tailwind ones. Seems like a hassle!


It’s indeed a hassle if you were to mix your own CSS classes and Tailwind, but I don’t do that. Instead I use Tailwind for its design system and set of defaults.

My own CSS is classic BEM style, so very component like.

In hindsight I could easily replace Tailwind and just produce a set of CSS variables as a design system.
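
A minimal sketch of what that could look like (the variable names and values here are just placeholders I'd pick, not anything Tailwind ships):

    :root {
      /* spacing scale, roughly Tailwind-style 0.25rem steps */
      --space-1: 0.25rem;
      --space-2: 0.5rem;
      --space-4: 1rem;

      /* a small, fixed colour palette */
      --color-surface: #ffffff;
      --color-text: #1f2937;
      --color-accent: #2563eb;
    }

    /* classic BEM component, constrained to the preset scale */
    .card {
      padding: var(--space-4);
      color: var(--color-text);
      background: var(--color-surface);
    }

    .card__action {
      margin-left: var(--space-2);
      color: var(--color-accent);
    }

The discipline comes from only ever referencing the variables, which is the part of Tailwind's design system I actually lean on.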


Yeah, the advantage of limiting colours and sizes to preset multiples made life a lot nicer for a while. Very easy to keep things looking consistent without needing to re-reference specific values. But they keep making it easier and easier to do things like p-[7px], which rather defeats the point, and there seems to be talk about changing the scales to just use px/rem numbers, which feels like a complete abandonment of the idea.

Customise the tailwind config for your design system, add additional CSS in a css file for specific needs. Generating arbitrary values on the fly just muddies the system back up again.


Exactly. Bootstrap always had utility classes, but it lost to Tailwind due to the focus on design and the great examples/templates that the Tailwind team put out early on, and then the market took over.

