I mean the best practices on the bazel website include:
> To use fine-grained dependencies to allow parallelism and incrementality.
> To keep dependencies well-encapsulated.
Like if I need 5 source files in a library to keep it well encapsulated, I'm doing that instead of making 5 libraries that are a rat's nest of inter-dependencies. And the headers, the repeated deps, the specific command-line arguments, and so on would be unreadable.
Then you aren't keeping your dependencies fine grained.
> Like if I need 5 source files in a library to keep it well encapsulated, I'm doing that instead of making 5 libraries that are a rat's nest of inter-dependencies.
If you can't form your dependency tree into a DAG, you have larger design issues. This is yet another thing that bazel does a good job of uncovering. Libraries with cyclic dependencies aren't well encapsulated, and you should refactor to remove that.
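As a sketch of the failure mode (target names are hypothetical), this is exactly the kind of graph Bazel refuses outright:

```python
# BUILD file sketch; names are hypothetical.
# Bazel rejects this with a "cycle in dependency graph" error,
# which is the encapsulation smell being described: neither
# library can be understood, built, or tested without the other.
cc_library(
    name = "parser",
    srcs = ["parser.cc"],
    deps = [":lexer"],
)

cc_library(
    name = "lexer",
    srcs = ["lexer.cc"],
    deps = [":parser"],  # cycle: lexer -> parser -> lexer
)
```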
I recognize that at a small scale this doesn't matter. But to be frank, there are parts of my job that I literally would be unable to accomplish if my coworkers did what you suggest.
Sure, I'll just go ahead and stick my flyweight class and its provider class in two different libraries. And have one of them expose their shared internal header that is not meant for public usage. That's not going to cause any issues at all (sarcasm).
One source file per library and one source file per object are an example of two policies that will conflict here (and I'm not trading code organization for build organization when I can just as easily use the features intended for this situation).
Meanwhile in the real world limiting the accessibility of header files not meant for public use prevents people from depending on things they shouldn't. Organizing libraries on abstraction boundaries regardless of code organization allows for more flexible organization of code (e.g. for readability and documentation). And so on.
This is why these features exist, and why Google projects like Tensorflow and Skia don't pathologically follow the practices you are espousing here.
> But to be frank, there are parts of my job that I literally would be unable to accomplish if my coworkers did what you suggest.
Then you are incompetent and bad at your job. To be blunt I would recommend firing an engineer who pathologically misused a build tool in ways that encourage hard to read code and difficult to document code, while also making the build bloated and more complicated, all in the name of barely existent (and not at all relevant) performance improvements. And then said they couldn't do their job unless everyone conformed to that myopic build pattern.
It's like, what, an extra 4 characters to get the list of code files in a library from a bazel query? What in the world could you possibly be doing that having to iterate over five files rather than one makes your job impossible?
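For what it's worth, a sketch of that query (the target label is hypothetical):

```shell
# Lists the source files declared on a single cc_library target.
bazel query 'labels(srcs, //base:strings)'
```

This needs to run inside a Bazel workspace, but the point stands: the file list is one query away.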
Bazel supports visibility declarations: your private internal provider can be marked package-private, or even restricted to a specific set of targets.
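A minimal sketch, with hypothetical names, of how that shared internal header could be fenced off:

```python
# BUILD file sketch; names are hypothetical.
# The shared internal header gets its own target that only this
# package (or an explicit allowlist of packages) may depend on.
cc_library(
    name = "flyweight_internal",
    hdrs = ["flyweight_internal.h"],
    visibility = ["//visibility:private"],  # same-package only
)

cc_library(
    name = "flyweight",
    srcs = [
        "flyweight.cc",
        "flyweight_provider.cc",
    ],
    hdrs = [
        "flyweight.h",
        "flyweight_provider.h",
    ],
    deps = [":flyweight_internal"],
    visibility = ["//visibility:public"],
)
```

Anyone outside the package who tries to depend on `:flyweight_internal` directly gets a visibility error at analysis time, before anything builds.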
> Tensorflow
Tensorflow is a perennial special case at Google. It's a great tool, but its consistent disregard for internal development practices is costly. A cost I've had to pay personally before.
> Then you are incompetent and bad at your job
No, I just don't have the time or interest in reimplementing language analysis tools when I don't need to.
But it doesn't actually have that. It has that for 1 very specific use case, sometimes. It's very much not generic or intentionally built into bazel, and I am almost certain that there are more complex cases where that caching will break down (for example when the various deps import each other, or, as mentioned, tests). Especially when it's easier to have tooling automatically manage build files for you, so you don't even have to do it by hand!
> But it doesn't actually have that. It has that for 1 very specific use case, sometimes.
I mean, not really: it's there reliably enough that for all practical purposes it effectively exists. I would be extremely surprised if that weren't intentional, when every other C++ build tool makes use of it extensively. I would have laughed bazel out of the room if it didn't make use of the fact that nearly all C++ compilers provide nicely formatted lists of the files a given file depends on.
Also it does work for tests perfectly fine.
> Especially when its easier to have tooling automatically manage build files for you, so you don't even have to do it by hand!
And this managing build file tooling is public? Because I am not at all aware of any automatic tools for bazel along those lines.
Which really is my frustration with bazel's Google developers: their views are eternally myopic about how other people use their tools (e.g. using C++17 properly means re-writing the whole toolchain from scratch, have fun!), yet those views are based on a closed and mostly unpublished ecosystem.
Let alone how terribly bazel does with dynamic libraries on Windows (each cc_library then outputs a dll! yay!).
I don't think you realize that, for anyone outside of Google, bazel's tagline might as well be "The best terrible option".
> Well sure, but build systems are hard, so that's still success in my book.
Sure, but suggesting that the one true way to use the tool is a way that only works inside of Google's ecosystem is myopic. I don't actually have complaints with bazel directly. I have issues with the googlers who continue to insist to others on using it in a way that only works inside of Google, and then act surprised when those people don't use it at all (or use it wildly incorrectly, or complain about the tool being poorly thought out, and so on).
What you suggest runs counter to many of the things that make Bazel useful for anyone outside of Google. I would swap back to CMake, to hand-written makefiles, hell, I would use shell scripts, before I used it the way you suggest.
> No, but neither is the language analysis stuff you suggest I build instead
I didn't say you specifically should go build that. I was saying it's possible with bazel; any language that doesn't have those features is not a mature programming language, and if bazel is going to be used with that language it should probably make use of those features (or accept that it's not a mature language). If C++ and JavaScript can provide reflection, then so can whatever bespoke language you are describing.
> and claim I'm incompetent if I don't do.
No, I claimed you are incompetent because you claimed you couldn't do your job if a `.lib` file contained the output from more than one source file.
> No I claimed you are incompetent because you can't deal with a `.lib` file that contains the output from more than one source file.
None of the work I do cares about the actual output artifacts. I care, almost exclusively, about the representation of the build graph itself. That work cannot be done if I don't have a build graph to analyze, which I don't if you do things like glob files together.
I've seen the simple act of untangling a globbed dependency to the various isolated files reduce the size of an output artifact by gigabytes.
> any language that doesn't have those features is not a mature programming language
You're calling literally every language that doesn't use the c-linker an immature language. Which may include c++17-with-modules, though I don't know that for certain.
> javascript
Javascript doesn't do what you're talking about though. At least not if you do any fancy precompilation/tree-shaking to your js at compile time. If all you're doing is copying the files, then sure, but as soon as you want to do useful things like, for example, run a typescript or closure analysis on your files, or generate js from typescript, or compile and minify your js, it will get a whole heck of a lot faster when you use more isolated build targets[0]. I know because our JS teams went to a lot of work to make js builds faster by taking advantage of this information.
> I've seen the simple act of untangling a globbed dependency to the various isolated files reduce the size of an output artifact by gigabytes.
And I don't take issue with what are obviously exceptional cases. What you are describing is not "I put together 5 related files", it's more like "I globbed a file I wasn't supposed to".
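As a sketch of that failure mode (file names are hypothetical):

```python
# Before: the glob silently owns every .cc in the package,
# including files the target never actually needed.
cc_library(
    name = "core",
    srcs = glob(["*.cc"]),  # also captures huge_generated_table.cc
)

# After: explicit srcs make the real dependency set visible in
# the build graph, and the stray file can be dropped or moved.
cc_library(
    name = "core",
    srcs = [
        "core.cc",
        "core_util.cc",
    ],
)
```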
> You're calling literally every language that doesn't use the c-linker an immature language. Which may include c++17-with-modules, though I don't know that for certain.
Uhhhh, I don't think you understand how the feature in question functions. I am simply talking about the -M option family of gcc (and similar commands for other compilers), which provides a dependency tree for C++ files.
> At least not if you do any fancy precompilation/tree-shaking to your js at compile time.
I think it's actually the opposite. Those are the same tools that provide the reflection features in question (or some other AST parsing library)... then you write a two-line lambda and feed that in over an interpreter. Boom: a list of files.