A whirlwind tour of the LLVM optimizer (slideshare.net)
107 points by matt_d on May 12, 2023 | 68 comments




I was about to go insane having to view an ad every 3 slides (which is sometimes the same slide). Thanks.


I maintain a high-performance JIT compiler used by large corporations around the world to run complicated business rules for logistics and optimisations. It uses LLVM as the backend for machine code generation.

I mostly disable the optimisations provided by LLVM, because they make code generation much slower with almost no performance gains. Writing high-level optimisation passes before converting the AST to LLVM is what made the generated code super fast.

LLVM has no knowledge of the semantics of the language. So it can only optimise low level details that the higher level optimisations will get rid of anyway.

It seems to me that the LLVM optimisations are only of benefit to you if you generate really bad LLVM code in the first place.


This should be a fairly unsurprising result. High-level languages are fundamentally more optimisable than low-level ones because, as you say, the latter express unnecessary constraints, lacking information about domain-level semantics. Low-level optimisations are also known to follow a Pareto distribution with respect to their efficacy: 20% of the optimisations are responsible for 80% of your performance (if not more). See for example MIR - https://developers.redhat.com/blog/2020/01/20/mir-a-lightwei...

That said:

> It seems to me that the LLVM optimisations are only of benefit to you if you generate really bad LLVM code in the first place.

Much LLVM development is sponsored by large corporations for whom it really is worth it to squeeze out that last 1%.


I guess your generated code contains a lot of black boxes that the optimizer can't see through. At my work, the LLVM optimizer is doing an insane amount of optimizations.


Rust is very optimizable by LLVM, but still has a similar issue with performance. It's costly to optimize overly verbose/inefficient LLVM IR. Rust ended up implementing its own (MIR) optimization passes that run before LLVM to generate more optimized IR for it.


MIR optimization doesn't just move work from LLVM to Rust, it saves work, because it runs before monomorphization (template specialization in C++).


JITs are generally one of the most challenging places to use LLVM, exactly because of its bad compile-time characteristics. There are some successful uses of LLVM based JIT compilers (e.g. Azul's Falcon JIT), but this is definitely a use case where you can't just use the standard optimization pipeline. You'll generally use a custom pipeline and likely only use LLVM for the second stage JIT compiler.

That said, I don't think your statement that LLVM optimizations only benefit you if you generate bad input IR is correct. It just sounds like they are not useful for your specific problem domain.


Out-of-the-box LLVM optimizations won't do much if you've put work into preoptimizing. I think this is even mentioned in the docs. IIRC they say that the generic optimizations only really work on naively generated IR, and that more mature projects will probably not find them useful and will instead create their own transforms - if you're mainly using the JIT, then your approach seems best. Running the pre-bundled optimizations is really a brute-force approach to optimization. It works great for just getting things going, but you outgrow it pretty quickly.


This is the exact problem MLIR intends to solve.

LLVM optimizations are only really effective for simplifying integer expressions and loop patterns seen in code generated from imperative languages.


Surely LLVM's inlining heuristics must be one of its strengths. I thought good inlining was almost all of optimisation these days, based on a Chandler Carruth talk on LLVM.


If you are compiling a language like C++ and care a lot about code size, yes.

Most compilers don't have to, and then you just inline whatever you can until some estimated register pressure is reached at the call site.


Is MLIR practically good at anything other than loop patterns in AI and HPC acceleration?


It's more of a 'bring your own optimizer' kind of framework.

The idea is that you know best what optimizations work for your domain.

But a compiler needs a large amount of engineering for things which are not optimizations.

MLIR makes it possible to get this infra (developed utilizing lessons from LLVM and other compilers) for free and share improvements among multiple compilers without pulling your hair out trying to understand misleading academic papers.


Can you elaborate on the extent to which you use LLVM optimizations? Is it comparable to e.g. clang's -O1/2/3?


I think this is fairly unsurprising.

My (partial/incomplete/buggy/experimental) Ruby compiler generates awful code, and still, by far the biggest performance bottleneck is the creation and garbage collection of objects, which improving the low-level code generation would have only marginal effects on.

E.g. finally adding type tagging for integers (instead of creating objects on the heap) sped up compiling itself by tens of times (taking it from unusably slow to comparable to MRI on that specific task), and there's nothing a low-level optimizer can do to figure out transformations like that.
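
For anyone curious, the trick is the classic low-bit tag. A minimal sketch (hypothetical names and encoding; real Ruby VMs use a similar scheme):

    #include <cassert>
    #include <cstdint>

    using Value = std::uintptr_t;

    // Low bit 1 = immediate integer, low bit 0 = (aligned) heap pointer.
    // Tagged integers never touch the heap: no allocation, no GC pressure.
    inline Value         from_int(std::intptr_t n) { return (static_cast<std::uintptr_t>(n) << 1) | 1; }
    inline bool          is_int(Value v)           { return v & 1; }
    inline std::intptr_t to_int(Value v)           { return static_cast<std::intptr_t>(v) >> 1; }

    int main() {
        Value a = from_int(21), b = from_int(2);
        assert(is_int(a) && is_int(b));
        Value product = from_int(to_int(a) * to_int(b)); // pure register arithmetic
        assert(to_int(product) == 42);
    }

Whether a value is a heap object or a couple of bit operations is decided long before LLVM ever sees the IR, which is why no low-level pass can recover a transformation like this.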

Maybe one day I'll get far enough on fixing the high level issues that it'll be worth even trying to do more complex low level optimizations, but that's a long time away.


Is any information on this public? I've believed for a while that domain specific optimisations are the right way to go but haven't found many examples of it in practice.

LLVM is largely (at least originally) tuned for clang's output, which tends towards simple IR that LLVM will clean up later, with a fair bias towards making numerical benchmarks run faster.


Question to pros: Things like constant propagation, loop unrolling, TCO, memory reuse etc., seem simple in theory. Yet why do languages end up depending on LLVM and not implement these algorithms on their own and avoid a monstrous dependency?

Like what exactly is the implementation cost of the top 20 optimizations? It can't be that high, right?


>Like what exactly is the implementation cost of the top 20 optimizations? It can't be that high, right?

I love the full-throated arrogance of this - "I have no idea but I'm sure it's not hard" lol

So I'll tell you - each pass is at minimum a paper (mem2reg) and at maximum several dissertations (Polly) to understand. Then have fun implementing even a basic alpha of it in your compiler that doesn't have all the creature comforts of LLVM - iterators, efficient data structures, debugging aids, etc. And then give up, because a paper or a dissertation is out of date as soon as it's in deadtree form, while LLVM is a living, breathing tapestry of insights, breakthroughs, and hacks too.

I highly recommend you attempt even one pass to get a feel: pick any language that has a parser impl and design an IR for it and then optimize it (designing an IR from scratch can't be that hard can it?)


Yet Chrome gave up on using LLVM for the V8 JIT, exactly because its slowdown wasn't worth the performance gains they were getting out of it.

The ART team made a similar decision regarding the AOT/JIT toolchain for native code compilation.

Adopting LLVM isn't always a success story.


They have probably made an informed decision, instead of "how hard can it be to write optimization passes and support different architectures? Like one or two sprints?"


(tbc I was forthcoming with my arrogance, like it genuinely didn’t make sense to me)

Why would constant propagation need a PhD level proficiency to implement? It’s basically running part of your code and no different from parsing.

Again not trying to trivialize or be rude. I wish this was something I could ask ChatGPT and trust its outputs but you know I can’t


The Programmers’ Credo: we do these things not because they are easy, but because we thought they were going to be easy

https://twitter.com/Pinboard/status/761656824202276864


A language is like the most flexible user interface possible - anything can be combined with anything else. Imagine how many edge cases, exceptions, quirks, etc. you would have to manage and still have it work. Sure, you can limit your scope and say "for these constraints, the optimization will give a 10x speed up", but in practice that won't cover many cases. For any optimization process to be worth it, it has to be applicable in a wide range of situations (simplifying, I know) while guaranteeing no ill side effects. Proving algorithms, especially ones that have to function in the real world, is very difficult.


For example, to evaluate something as simple as 1.3+2.4, you need an exact model of your target architecture's fp unit.

If you want to cross compile (generate code for x86 while running the compiler on anything other than x86), you need to implement 80-bit long double floating point in software.

If you want to cross compile for PowerPC, you get to build software support for their bespoke "double double" format, which is not IEEE 754 binary128 but a totally different thing.

Some GPUs (and also CPUs) flush fp denormals, so you also have to handle this in your constant folding.

And all that is just to handle addition, one small part of constant folding.
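
(LLVM itself handles this with its software float library, APFloat. A rough sketch of mine of what folding in the target's semantics looks like:)

    #include "llvm/ADT/APFloat.h"

    // Fold 1.3 + 2.4 in the *target's* semantics, not the host's.
    // Different bits come out for APFloat::IEEEdouble(),
    // APFloat::x87DoubleExtended() and APFloat::PPCDoubleDouble().
    llvm::APFloat foldAdd(const llvm::fltSemantics &Sem) {
        llvm::APFloat A(Sem, "1.3");
        llvm::APFloat B(Sem, "2.4");
        A.add(B, llvm::APFloat::rmNearestTiesToEven);
        return A;
    }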


You're making assumptions about the input language's guarantees about floating point math here. That illustrates the challenge with LLVM: it needs to be complex because it aims to support a very large set of both inputs and outputs. What you say is true in the general case. It may or may not be true for a given compiler, depending on the input language and output architectures, and indeed on whether you need to care about cross compilation.

It makes the important point that one needs to be more precise when evaluating complexity.


>Why would constant propagation need a PhD level proficiency to implement?

This is a false premise (or one of those other fallacies) - constant prop might not be complex, but constant prop isn't in the top 20 perf-relevant codegen passes.


They are not hard assuming you constrain the problem. LLVM is complex because it's trying to generalise to support a huge number of input languages and a huge number of outputs.

The upside is that it lets you write generic optimisation passes. The downside is complexity.

It's still an open question whether or not that is a good approach in general vs. more specialized code generators and accepting that this means writing specialized optimisation passes.

Your answer to that is likely to vary greatly based on goals and which languages and architectures you care about supporting.


> ask ChatGPT and trust its outputs

If anyone's wondering, its response to the GP comment is 100% correct as far as I can tell. The only problem is that it's missing technical details and has a hard time giving specific examples.

---

There are several reasons why programming languages might choose to rely on LLVM or other external compilers and optimization frameworks rather than implementing optimizations on their own:

Focus on language design: Many programming languages prioritize language design and functionality over compiler optimization. Implementing optimizations can be a complex and time-consuming process, and language designers may prefer to focus on other aspects of the language.

Difficulty of optimization: While some optimizations may seem simple in theory, they can be difficult to implement in practice, especially for complex programs. Additionally, different optimizations may interact with each other in unpredictable ways, making it challenging to optimize code effectively.

Targeting multiple platforms: Many programming languages are designed to be portable and run on multiple platforms. Relying on external compilers and optimization frameworks like LLVM can help ensure that code is optimized for each platform without requiring extensive platform-specific optimizations.

Access to expertise: Compiler optimization is a specialized field that requires a deep understanding of computer architecture, algorithms, and programming languages. By relying on external frameworks like LLVM, programming languages can leverage the expertise of experienced optimization engineers and researchers.

As for the implementation of the top 20 optimizations, the specific details will depend on the optimization and the programming language being used. However, many optimizations involve analyzing the code to identify opportunities for improvement and then applying transformations to the code to make it more efficient. Some optimizations, like constant propagation and loop unrolling, may involve relatively simple transformations, while others, like register allocation and instruction scheduling, can be more complex.

--- snippets from "Use highly technical details and examples to answer this question.":

Memory reuse involves optimizing the use of memory to reduce the number of accesses and minimize cache misses. This can involve techniques such as loop-invariant code motion, which moves code that does not depend on loop variables outside of the loop, or loop fusion, which combines multiple loops that iterate over the same data into a single loop. However, these optimizations may also depend on the specific architecture being targeted, as the optimal memory access patterns may differ depending on the cache size and associativity

Writing a high-quality compiler that can handle all of the edge cases and produce optimized code for a wide range of architectures requires a significant amount of expertise and resources. Additionally, using an external compiler can provide a degree of portability, as the same code can be compiled for multiple architectures without requiring changes to the code itself

--- snippet from "Why would constant propagation need a PhD level proficiency to implement? It’s basically running part of your code and no different from parsing.":

Constant propagation is indeed a relatively simple optimization technique in theory, and it does not require a PhD level proficiency to implement it. In fact, most compilers include some form of constant propagation as part of their optimization pipeline, and many of these implementations are relatively straightforward.

However, there are several factors that can make constant propagation more challenging to implement in practice. Some of these factors include:

Handling interactions with other optimizations: Constant propagation can interact with other optimizations in non-trivial ways, such as dead code elimination and loop unrolling. Ensuring that these optimizations work together correctly can require careful analysis and testing.

Dealing with complex data structures: In some cases, constant propagation may need to deal with complex data structures, such as arrays or pointers. Ensuring that constant propagation works correctly in these cases can require more advanced techniques, such as dataflow analysis or points-to analysis.

Supporting language features: Different programming languages may have different rules for when constant propagation is allowed or required. For example, some languages may allow constants to be defined using complex expressions or function calls, while others may require constants to be literal values. Supporting these language features can require additional analysis and code generation logic.


Doing a good job of constant prop means doing a thorough job of implementing the best algorithm in the best paper. And to do that, you need to build a form of SSA. And to do that you need to compute dominators. And to do that, you need to build an appropriate form of the CFG. Don't need a PhD to do all those things (not inventing anything new here, just using other peoples' beautiful results), but it's still plenty of work to build a fast & thorough implementation, with lots of testing required.

Can't possibly be economical; but of course, the experience will teach you a lot.
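
To make "plenty of work" concrete: even a toy, straight-line version of the idea - assuming SSA is already built, with none of the CFG/reachability handling real SCCP needs - looks something like this (a hypothetical mini-IR, sketch only):

    #include <cstdint>
    #include <iostream>
    #include <map>
    #include <optional>
    #include <string>
    #include <vector>

    // Three-level lattice per SSA name: unknown -> constant -> overdefined.
    struct Lattice { bool over = false; std::optional<std::int64_t> c; };

    struct Inst { std::string name, op; std::vector<std::string> args; std::int64_t imm = 0; };

    int main() {
        // %a = 2; %b = 3; %c = add %a, %b; %d = add %c, %a
        std::vector<Inst> prog = {
            {"a", "const", {}, 2}, {"b", "const", {}, 3},
            {"c", "add", {"a", "b"}}, {"d", "add", {"c", "a"}},
        };
        std::map<std::string, Lattice> state;
        for (bool changed = true; changed; ) {   // sweep to a fixpoint
            changed = false;
            for (const auto &I : prog) {
                Lattice v;
                if (I.op == "const") v.c = I.imm;
                else if (I.op == "add") {
                    Lattice x = state[I.args[0]], y = state[I.args[1]];
                    if (x.over || y.over) v.over = true;
                    else if (x.c && y.c) v.c = *x.c + *y.c;
                }
                Lattice &cur = state[I.name];
                if (cur.over != v.over || cur.c != v.c) { cur = v; changed = true; }
            }
        }
        for (const auto &[n, v] : state)         // prints a=2, b=3, c=5, d=7
            if (v.c) std::cout << "%" << n << " = " << *v.c << "\n";
    }

The real thing has to handle phis, unreachable code, overflow semantics, and interaction with every other pass - which is where the dissertation-shaped work lives.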


Other than the first response, which seems to be just the most basic ELI5 way of answering the question (and probably not helpful to the original question), this has some pretty solid answers on the difficulty that might not be obvious to someone who hasn't worked with LLVM or built a programming language before.


> (designing an IR from scratch can't be that hard can it?)

I'm not GP, but I am someone designing an IR. No, it's not that hard. A lot of work, yes, but not that hard.


Everyone always does this - plays down the amount of work thing X is if they themselves have done thing X. From this perspective you are ineluctably led to a binary classification of hardness:

1. Things that have not been done are hard.

2. Things that have been done are easy.

Ok maybe but given the choice between implementing my own IR and using LLVM IR and going to the beach, I choose the latter.


Depending on your definition of “from scratch” I’d agree it isn’t all that hard.

I mean, if I can do it…

Just a couple days ago I was poking at one of my yak shaving projects which involves a generator for IR nodes as part of a backend for… something, haven’t quite decided where it’s going yet. Doesn’t really matter because it’s just the next logical step after the already written AST node generator.

This is just something I'm doing because I find it interesting, and I haven't spent all that much time on it. All I know is that if I want to lower AST to IR, I need IR nodes, and in order to know what functionality an IR generator needs, I must have an IR to target.

As they say, you eat an elephant one byte at a time.


Yes.

My definition of hard is "I can't do it without leveling up," in whatever form leveling up may be, such as getting a PhD.

I can do it without leveling up, so it is not hard for me.

It just goes to show how much I hate C++ that I'll avoid LLVM specifically to avoid it.


"designing an IR" may or may not be hard - designing a close-to-optimal generic IR that can be sourced from many different languages and target many different backends, each with their own idiosyncrasies on both ends may not even be "hard" by some definition of hard (apparently one where a lot of work can still be easy), but then designing optimizations and transforms that practically work pretty good across all those hacks and edge cases, enough that major companies now base their toolchains on your optimizations? I'm pretty sure that's gotta be hard


Yes, that's hard. But the original comment was solely about IR design.

For me, I'm not worried about companies adopting my platform. I just want to not have a C++ dependency. Yes, I hate C++ that much.

So I'm not terribly worried about other languages. I am worried about many different backends, though.


The previous comment was about optimizations, though I can see how it was confusing. Someone said: "Question to pros: Things like constant propagation, loop unrolling, TCO, memory reuse etc., seem simple in theory. Yet why do languages end up depending on LLVM and not implement these algorithms on their own and avoid a monstrous dependency?" The response said optimizations are hard and recommended designing an IR specifically in order to optimize it: "pick any language that has a parser impl and design an IR for it and then optimize it (designing an IR from scratch can't be that hard can it?)"

I have a commercial product, a development tool, that gets a huge chunk of value from targeting x64, arm/arm64 and wasm - I actually do like C++, but I don't like the massive bloat of LLVM. However, just getting all these backends for free, and how easy it was to get a 90%-there thing going by writing parser -> IR and then having everything else just taken care of, was worth it for me. Compilation speeds definitely hurt from the LLVM dependency, but it's a tradeoff.


From this kind of response, it's always so hard to tell whether it's easy for you because you have the relevant experience, or you just think it's easy because you don't.

A (very non-exhaustive) list of interesting questions about IR design are:

* What's your model for delayed UB and UB materialization?

* What's your provenance model?

* What's your strategy for information retention and handling of flow-sensitive facts?

* What inherent canonicality is there?

* What's your approach to target or domain specific extensions?

* What is your strategy for formal verification of refinements?

Questions like "What instructions does your IR support?" are fairly uninteresting, and are not what IR design is (mostly) about.

It's worth noting that LLVM's own IR design doesn't have a very good answer to some of those questions either, in part because making changes to an IR that is as widely used as LLVM IR is hard (been there, done that). It's easier to design a new IR than to change an existing one -- however, unless you just want to reinvent past mistakes, it is certainly helpful to have deep familiarity with an existing IR design and its problems.


You are correct in that I could be merely thinking it's easy. So let me answer your questions as best I can.

* UB is not exposed to the compiler/IR on purpose. I don't want compilers using it as an excuse to be adversarial like today's compilers. There is still UB, but as little as I can get away with (mostly in data races and race conditions).

* I'm designing a new provenance model right now. Incomplete. If anything is hard, it's this. And it might be.

* It is possible to attach information to any item or group of items. In fact, my IR will be able to attach so much information that it should be possible for high-level passes to reconstruct high-level code. Think MLIR, with pure data. For example, you could group basic blocks together and label them as coming from a while loop. You will also be able to generate and use e-graphs. My model will also be different from LLVM's: analysis will only happen on the original IR, with information generated only on the original. Thus, no information is lost before analysis.

* Canonicality is another one I'm working on and may be hard, though less hard than provenance, because my IR uses basic block arguments instead of phi nodes. But the basic idea is that optimization happens differently, so canonicalization should be easier than in LLVM, because I'm going to design it to be.

* User-specified instructions with a way of defining their semantics in code.

* I'm starting with e-graphs and going from there. Still working on this.

I agree with you that an instruction list is not interesting at all.

Anyway, yeah, you are right, but I think I have mostly satisfactory answers.


Is there a good resource you would recommend to get started understanding this as a hobby?


IR passes are meant to be easy to do. IR in, optimized or analyzed IR out. That is in fact the beauty of the LLVM architecture.

Writing an effective general-purpose pass, on the other hand, is unboundedly hard.


I agree, but the original comment was about IR design, not optimization.


They're asking for ELI5, not saying "you all suck because this is obviously easy."


It turns out that many of the standard optimizations just don't work without a lot of extra work. For example, loop optimization algorithms don't take into account that changes to the loop induction variable may result in overflows.

It just takes a while to figure out how to account for signed/unsigned, integral promotions, overflows, etc.

Some optimizations also defeat other optimizations. For example, on the x86, various arithmetic operations can be combined into a single instruction. But common subexpression elimination can subvert it. It isn't always better to do common subexpression elimination.
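
A small illustration of the induction variable point (my example, not from the parent):

    // The optimizer wants to rewrite this loop as a pointer walk with a
    // 64-bit counter. With `int i`, signed overflow is UB, so it may
    // assume `i` never wraps and the rewrite is easy. Make `i` unsigned
    // and wraparound is defined behavior that the transform must now
    // prove impossible or preserve; that bookkeeping is the extra work.
    long sum(const int *a, int n) {
        long s = 0;
        for (int i = 0; i < n; ++i)
            s += a[i];
        return s;
    }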


The architecture specific backends are another significant part of the compiler, including architecture specific optimizations.

I suppose one of the reasons you might think of the LLVM project as monstrous is that it's feature-rich.


What if I care about like 4 archs: x86/64, arm64 and PTX?


You might be underestimating the intricacy of the CPU models LLVM uses.

If you want to see them in action, the same data drives llvm-mca[1], which given a loop body can tell you the throughput, latency, and microarchitectural bottlenecks (decoding, ports, dependencies, store forwarding, etc.)—if not always precisely, then still as well on average as, say, x86’s IACA, the tool written at Intel by people who presumably knew how those CPUs work, unlike LLVM contributors and the rest of us who can only guess and measure. This separately for Haswell, Sandy Bridge, Skylake, etc.; not “x86”.
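
(Trying it is a one-liner; flags per the docs in [1]:)

    llvm-mca -mtriple=x86_64-unknown-unknown -mcpu=haswell -timeline loop.s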

Now, is this the best model you can get? Not exactly[2], but it’s close enough to not matter. Do we often need machine code optimized that finely? Perhaps not[3], and if you’re using generic distro binaries, you’re not getting it, either. (Unlike Facebook, Google, etc., who know precisely what their servers have inside, and who fund or contribute sizable portions of this optimization work.)

With that in mind you can shave at least a factor of ten off LLVM’s considerable bulk at the cost of 20—30% of performance[4,5]. But if you do want those as well, it seems that the complexity of LLVM is a fair price, or has the right order of magnitude at least.

(Frontend not included, C++ frontend required to bootstrap sold separately, at a similar markup compared to a C-only frontend with somewhat worse ergonomics.)

[1] https://llvm.org/docs/CommandGuide/llvm-mca.html

[2] https://www.uops.info/

[3] https://briancallahan.net/blog/20211010.html

[4] https://c9x.me/compile/

[5] https://drewdevault.com/talks/qbe.html


How much benefit do the models actually provide, other than for ISel? My processor now has an enormous ROB and a huge number of execution units, and I've never really noticed anything super dramatic from compiler instruction scheduling.

MCA is basically useless for most programmers because it can't model the memory accesses, i.e. cache performance.


Then you still have to do 4x the amount of work you otherwise would. And when the next iteration of those chips becomes available, you will have to update your tooling yourself instead of getting updated pipeline models for the cost of a source upgrade.


Abstracting over x64 and aarch64 will go reasonably well until the backend. Throwing ptx (or sass, or amdgpu) into the mix will make life much more difficult.


Do code size, instruction scheduling and register allocation matter to you?

Do you have an alternative to LLVM that only provides those four architectures? If not, you can constrain the scope of your LLVM build using -DLLVM_TARGETS_TO_BUILD.
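
For example (a standard LLVM CMake invocation; X86 covers both 32- and 64-bit x86, and NVPTX is the PTX backend):

    cmake -S llvm -B build -DCMAKE_BUILD_TYPE=Release \
        -DLLVM_TARGETS_TO_BUILD="X86;AArch64;NVPTX"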


Your backend will probably be much worse than the one LLVM provides.


Optimizer person here, who has implemented each of these at least 20 times ;-)

The tradeoffs and engineering are quite complex.

The short answer is that the algorithms themselves are not the hard part. Compilers do not get better by magic algorithms most of the time, they get better by careful and hard tuning and testing (where are optimizations missed, what performance is lost or gained somewhere, etc).

They get better 0.01% at a time over 20 years. There are no magic bullets, only really hard work.

Those that try to implement these without LLVM discover it themselves, and either put in the work, or give up and realize it's better to reuse the work.

I mean, don't get me wrong - we spend plenty of time reducing complexity of algorithms, etc. The difference between a textbook algorithm and one implemented in a production compiler is often the difference between a Fisher-Price cell phone and an iPhone.

That isn't always true mind you (sparse constant prop is pretty simple in both cases), but it's often true.

But this is applied engineering.

For example, the SSA construction algorithm in LLVM is based on Sreedhar and Gao's linear-time algorithm. The paper describes a mechanism that requires construction of separate data structures, is somewhat complex, etc. If you were to implement it straight out of the paper, it would be pretty slow. Much slower than other mechanisms.

LLVM's version is simple, 200 lines of code, and faster than just about any other algorithm you will find on both small and large functions. Oh, it also handles liveness pruning and works to compute both forward and reverse iterated dominance frontiers.

See https://llvm.org/doxygen/GenericIteratedDominanceFrontier_8h...
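
For a flavor of what's underneath: the classic dominance-frontier kernel (Cooper/Harvey/Kennedy style), which iterated dominance frontiers build on, fits in a few lines once immediate dominators exist. A sketch of mine, not LLVM's code:

    #include <iostream>
    #include <map>
    #include <set>
    #include <vector>

    using Node = int;

    // Classic DF kernel: b is in the frontier of every node on the path
    // from each of its predecessors up to (but not including) idom(b).
    std::map<Node, std::set<Node>> dominanceFrontiers(
            const std::map<Node, std::vector<Node>> &preds,
            const std::map<Node, Node> &idom) {
        std::map<Node, std::set<Node>> df;
        for (const auto &[b, ps] : preds) {
            if (ps.size() < 2) continue;  // only join points end up in anyone's frontier
            for (Node p : ps)
                for (Node runner = p; runner != idom.at(b); runner = idom.at(runner))
                    df[runner].insert(b);
        }
        return df;
    }

    int main() {
        // Diamond CFG: 0 -> {1,2} -> 3. A variable assigned in both 1 and 2
        // needs a phi in 3; and indeed DF(1) = DF(2) = {3}.
        std::map<Node, std::vector<Node>> preds{{1, {0}}, {2, {0}}, {3, {1, 2}}};
        std::map<Node, Node> idom{{1, 0}, {2, 0}, {3, 0}};
        for (const auto &[n, f] : dominanceFrontiers(preds, idom))
            for (Node b : f)
                std::cout << "DF(" << n << ") contains " << b << "\n";
    }

The hard-won part is everything around this kernel: liveness pruning, speed on huge functions, and the forward/reverse generalization the header above provides.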

Could language authors spend their time understanding the theory well enough to do this, reducing complexity, and engineering something that works as well? Sure, it's just software.

Is it a good use of their time? Probably not.

This did not come out of thin air like this either. It's based on 10+ years of people improving pieces of it, reducing complexity, reusing it elsewhere/etc. It's easy to look at it as having come this way fully formed, but it didn't ;) (in this particular case, even the in-llvm code history does not do it justice).

Someday, I hope LLVM is not really necessary, whether it's because we can run the sorts of complex/combined algorithms and not worry about it, or because AI is good enough at approximating optimizing pipelines or whatever.

But right now? if you want to compete on performance for real, you'd be hard pressed to do it.


You'd need to reimplement LLVM code generation and all. These optimizations only work in that part of the compilation process; do it on your own IR and after lowering to LLVM you'll still want to do it again.


Let’s say I’m targeting x86 or PTX or some standard subo instruction set. I can do vanilla codegen but it is slow af


But why would you, when LLVM is there and will generate much better code, much more easily, and very probably with fewer bugs than any custom backend?


Education – learning to do something yourself – is a worthy end unto itself. Learning things is fun. Knowing how something works front-to-back feels good.

If you're making something that's meant to be load-bearing, something professional that will be used in production, then yeah, just use LLVM. But otherwise I think it's important people feel free to decide the goals and constraints of their hobbies and to pursue their preferences when it comes to their personal projects.


The original comment was about languages, not something that was purely a hobby.


I wasn't replying to the original comment. I was replying under the branch of the comment tree where jimsimmons wrote

> Let’s say I’m targeting x86 or PTX or some standard subo instruction set. I can do vanilla codegen but it is slow af

and

> Because I want to be self hosted

which struck me as something a hobbyist would say but someone working on a production-grade language and compiler would not.


Because I want to be self hosted


Self-hosting llvm isn't all that hard.

But you might possibly have found that one weird corner case where it doesn't make sense to depend on a mighty, highly sophisticated, industrial-strength, extremely well tested and supported, nicely mature compiler tooling framework.

But that's a pretty big exception to most language implementers who get all of that and more for super-cheap compared to implementing it themselves.


Self hosted typically includes building the compiler in the language it compiles. E.g. my Ruby compiler is self hosted except for needing an assembler and linker (and I might get rid of that need too) and a minimal C library.

If you're writing a C++ compiler, self-hosting LLVM might not be a massive additional complexity. For any other language it's not an option if self hosting is one of your goals.


> Yet why do languages end up depending on LLVM and not implement these algorithms on their own and avoid a monstrous dependency?

That's like asking why construction companies rely on cranes, concrete and steel rebar when they could be using sticks, leaves and vines.

LLVM is so mature that it's hard to improve it. If you don't care about the performance of your languages code then fine, use leaves and sticks and moderate winds will knock down your shelter.


Ya, I was thinking about an analogy as I was writing my other response - thinking you're gonna make your life easier by reinventing LLVM is like those people that think they're gonna make their lives easier by freeing themselves of the encumbrances of `society` (by moving off-grid to the middle of nowhere). I believe they quickly come to realize that `society` was invented for a reason.


What is this horrendous website? Every 3 slides I get a full screen video ad. Download requires login.

If the creator is here: I recommend just sticking the PDF on GitHub and then submitting the raw link so we can read your good work!


https://en.wikipedia.org/wiki/SlideShare - launched in 2006, acquired by LinkedIn in 2012, then acquired from LinkedIn by Scribd in 2020.

It looks like Scribd have leaned hard into interstitial ads to try and monetize the 17+ years of content on there.


Videos and slides from the LLVM developer's meetings get posted onto the event site, though not immediately afterwards (as they have to get all the videos processed). If you're willing to wait a short while, the slides and presentation should eventually be uploaded and linked to from https://llvm.org/devmtg/2023-05/.



