The proposed feature is great, but the Go team's unwillingness to use a separate, clearly defined project file, or at the very least a separate syntax in the code file, leads them to stuff every additional feature into comments, a space shared with human notetaking.
Let's have a look:
* Build constraints (// +build linux)
* Code generation (//go:generate <command> <arguments>)
* Cgo, you can even stuff entire C programs in the comments (// #include <stdio.h>)
* Cgo flags (// #cgo CFLAGS: -DPNG_DEBUG=1)
* and now this, file embedding (//go:embed html/index.html)
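For concreteness, here is a minimal, hypothetical Go file carrying one of these directive comments in situ (the program body is a trivial placeholder, and `stringer` is just a commonly cited example generator):

```go
// Package main exists only to show a directive comment in context.
package main

import "fmt"

// The next line is a directive, not commentary: `go generate` will run it,
// while `go build` silently ignores it.
//go:generate stringer -type=Color

// Color is a placeholder type a real stringer run would act on.
type Color int

const (
	Red Color = iota
	Green
)

func main() {
	// To a reader, the //go:generate line above is indistinguishable from
	// an ordinary comment unless they already know the convention.
	fmt.Println(int(Green))
}
```

Nothing in the syntax distinguishes the directive from a note a colleague left behind, which is the complaint being made here.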
Most novices would assume the commented out code does nothing, and rightly so in my opinion.
Half of these features aren't even code-file specific but project-wide, which makes it hard to decide which file to put them in, and even harder to look them up later.
You forgot documentation (which actually annoys me more, because I can't put a comment addressed to developers of a function right before the function; it will be interpreted as documentation for users of the function).
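For anyone who hasn't hit this: a comment block that directly touches a declaration becomes its godoc, and the only way to keep a note internal is a blank line. A sketch (`Greet` is a made-up example):

```go
package main

import "fmt"

// TODO(dev): internal note for maintainers. The blank line below is what
// keeps this paragraph OUT of the generated documentation for Greet.

// Greet returns a greeting. Because this comment touches the declaration,
// it IS the public godoc for the function.
func Greet(name string) string {
	return "hello, " + name
}

func main() {
	fmt.Println(Greet("gopher"))
}
```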
I mean yeah, I think the comment thing is a bit wonky, and it's not the way I'd do it. But once you've got "//go:generate", adding "//go:embed" isn't really any worse.
EDIT: Just noticed this line in the proposal:
> Another explicit goal is to avoid a language change. To us, embedding static assets seems like a tooling issue, not a language issue. Avoiding a language change also means we avoid the need to update the many tools that process Go code, among them goimports, gopls, and staticcheck.
So this partly explains the idea of putting things in comments: things classified as "tooling issues" are put there so that "language tools" aren't affected. I do agree it probably would have been better to invent a specific language construct for, "This is a tooling issue, please ignore"; a bit like the # prefix in C.
And I would argue that it's bull. Once they realised that they were introducing pragmas / attributes into the language, they should have bitten the bullet and actually introduced a clean way to annotate language items.
The first and second ones are excusable as "we'll only need one" and "well, it's not worth the hassle", but by the third one it's not a special case, it's a pattern. Especially as the excuse that "other implementations can ignore those" gets less and less true: a compiler that ignores go:embed cannot be considered working.
Wow, the lexer function is even named "pragmaValue". So Go does have a concept of pragma, not just magic comments. Go developers' stubbornness on forward compatibility is beyond me.
This is essentially embedding bits and pieces of a makefile into source code as ad hoc comments. Which is what happens when you don't have a proper, general-purpose build system.
I’m sure this comment won’t be popular, but this is making a mountain out of a molehill. I particularly like Go’s build system—specifically that it isn’t some disaster that requires learning a one-off DSL (a la CMake, Gradle, Make, etc) just to enumerate dependencies. It’s clearly not perfect, but it’s always seemed pretty reasonable to use a general purpose build tool (like Bazel or Nix or Make) to call into the Go tool for more complicated builds.
The one off DSL exists. It’s worse because you may think it doesn’t but it does: it’s just in the comments. The fact that COMMENTS have side effects that affect how your code is built is so backwards I don’t get the justification.
The comment interface is minimal, easily understood, and affects only a small minority of projects because it is explicitly for advanced features. Contrast this with CMake, etc where you need to use the DSL for even single-source-file builds with zero dependencies. Then you have multi file builds which involve more complex DSL scripts, and then you have builds that need to pull dependencies which are more complex yet. This is all supported by the Go tool out of the box without any DSL whatsoever. Then you have things like linking to a C project which is just a comment in Go, but in DSL-based languages it’s even more scripting. This is all strictly better than having to deal with a DSL, especially from day 0.
Using comments (or “COMMENTS”) for these directives is sorta not ideal but if anyone could produce a case study where it causes anything but the most trivial of problems, I would be shocked. This seems like textbook overreaction.
Maybe these could even become some kind of code that you can write yourself, defining your own pragmas and code transformations by passing a lib folder of prebuilt compiler extensions to the compiler... Ah well.
That's right: if you don't have a newline between the build comment and the package clause, it gets treated as a package doc comment, not a build constraint. So without the newline the comment is just a comment, and with it, it isn't.
If we had a normal pragma, "go build" could error out instead, but we don't.
Similarly, if you typo
//go:embd file
go build will ignore that as a comment. If it were '#embd', the tooling could error out with "unrecognized pragma".
I think that's the main argument for not using comments.
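The failure mode is easy to demonstrate: an unrecognized //go: prefix is simply an ordinary comment as far as `go build` is concerned (the directive name below is deliberately misspelled):

```go
package main

import "fmt"

//go:embde testdata/file.txt
// The misspelled directive above does nothing: no error from `go build`,
// no warning, just a comment. A real pragma syntax could reject the
// unknown name instead of silently ignoring it.
func status() string {
	return "built and ran despite the bogus directive"
}

func main() {
	fmt.Println(status())
}
```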
Imagine instead being able to write cgo as:

#pragma C {
#include <stdio.h>

int foo(int *bar) {
    ...
}
}
Or having documentation look like this:
// FIXME: This is kind of a hack, and we should
// get rid of it before v1.0
#pragma docs {
The FooBar function will Foo all the Bars passed to it.
}
func FooBar(...) {
You could even imagine modifying your editor to do proper syntax highlighting on the C code.
Ah, well -- water under the bridge at this point. I like Go as a language overall. Most of their experiments have worked out pretty well, I think; it's inevitable that some of your experiments don't work out the way you expect, and then you're stuck with them.
I'm not sure about using this syntax for embedded files, but in general I see their point:
These annotations are not Go code, but they are specific to this Go source file.
The latter means that embedding them in the Go source file makes them easy to find and read, and avoids multiplying configuration files.
From there, the former (adding the annotations in comments) is basically the simplest, and perhaps only, way to embed them while making sure the file remains valid Go code.
That’s showing some real support to the community IMO.
There are plenty of purists who argue both for and against the concept, but one cannot deny it fits very well into the sales pitch for Go: a simple toolchain to ship a binary for any number of systems, with support for embedded assets as a first-class citizen.
I would also love to see them address more robust plugin options or officially adopt / endorse the pattern HashiCorp uses as they’re doing here with the bindata prior art.
Go is not a purist language. It's pretty relentlessly designed for maximum productivity with minimum complexity. This sort of feature is right in line with its mission.
I've been a dev for more than 10 years: a lot of Python, C#, Java, JS/TS, Kotlin, a bit of Rust, and I was recently forced to use Go.
Writing out source code by hand: Nothing beats TS or Kotlin. Not even Python.
But what surprised me with Go after a couple of months (and please, nobody should judge a language until having done a serious project with it): taking everything into account, from installing to looking for libs, writing code, testing, compiling, revisiting code after a couple of months... Go is an absolute dream in terms of overall efficiency.
Writing out error-checking or for-loops is a single shortcut in GoLand. So is getting a table-test template for a function. People complaining about having to type this stuff out should learn a proper editor.
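The table-test shape being generated is roughly this (a generic sketch with a made-up `abs` function, not GoLand's exact template):

```go
package main

import "fmt"

// abs is a trivial function to exercise with a table-driven test.
func abs(x int) int {
	if x < 0 {
		return -x
	}
	return x
}

func main() {
	// Each case pairs a name and input with the expected output; after the
	// template expands, the cases are the only part written by hand.
	tests := []struct {
		name string
		in   int
		want int
	}{
		{"positive", 3, 3},
		{"negative", -3, 3},
		{"zero", 0, 0},
	}
	for _, tt := range tests {
		if got := abs(tt.in); got != tt.want {
			fmt.Printf("%s: got %d, want %d\n", tt.name, got, tt.want)
		} else {
			fmt.Printf("%s: ok\n", tt.name)
		}
	}
}
```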
Whilst I agree with your comment, I would like to point out, as a personal opinion, that having to rely on your editor to churn out boilerplate like error checking is a smell; it reminds me of ye olde Java, where Eclipse could generate hashCode and equals. Later on I always used Lombok, and if I were to work in Java again I'd push hard for Kotlin, which has that stuff built in. I know Scala does as well, but Scala is a can of worms: it has too many options, and no two developers will write the same code.
I've never needed to have an editor churn out Go error handling, and I don't find it that hard. It could be better, but not much better.
Exceptions are not less complex. They just move the complexity around. Exceptions as typically implemented also don't play well with event driven or concurrent models.
The thing I actually like about Go errors is that it makes you think about them right when they happen, while an exception encourages dealing with errors "later" which often means "never" or "as an afterthought."
Go kind of has exceptions, but panics are intended only for very extreme cases like out of memory errors that crash most applications and are often not recoverable. Using panics for non-fatal errors is bad Go code.
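The distinction in code, with a made-up `parsePositive` as the example: expected failures come back as error values, while panic/recover exists but is reserved for genuinely exceptional states.

```go
package main

import (
	"errors"
	"fmt"
)

// parsePositive returns an error for bad input instead of panicking:
// the idiomatic pattern for expected, non-fatal failures.
func parsePositive(n int) (int, error) {
	if n < 0 {
		return 0, errors.New("negative input")
	}
	return n, nil
}

func main() {
	if v, err := parsePositive(42); err == nil {
		fmt.Println("ok:", v)
	}
	if _, err := parsePositive(-1); err != nil {
		fmt.Println("handled:", err)
	}

	// panic is for "this should never happen"; recover limits the blast
	// radius, but using it for routine control flow is considered bad Go.
	defer func() {
		if r := recover(); r != nil {
			fmt.Println("recovered:", r)
		}
	}()
	panic("truly exceptional state")
}
```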
One reason I think people don't get Go is that it occupies a language niche that formerly was not occupied by any language. It's like Python meets C, a "low level scripting language." It's designed to be productive and pragmatic but fast and compiled and capable of dealing with pointers or even embedded ASM.
I want learning to pay off in clarity. I don't want a language in which the best developers have no choice but to write the same bloated boilerplate that the worst developers would have written.
On codegen, I've never seen an IDE that will help everyone read what it generates. If we must generate code, do it during the build and then throw it away, don't make it something to maintain and review.
Apart from the, I feel, quite pretentious and lazy Rob Pike quote: the productivity in this case probably refers to the initial period rather than the steady state afterwards (so to speak).
Go is a fairly horrible language by today's standards, but it does what it's supposed to. I like generating code automatically at compile time, but I can appreciate the appeal of something that forces you to actually do the work rather than being clever.
There's also a lot outside of the language (tooling, ecosystem, etc.) that Go absolutely nails; however, these things are tremendously undervalued. You have people in this thread talking about how awful Go is for using "//go:" instead of #pragma, as though this sort of concern dominated the software development process.
Amen to this. I don't remember the source, but I read somewhere that Google's internal 'killer feature' is indeed the low barrier of entry. You can get a new intern working with it in literally a week. It's also good enough for all employees up to the very senior people, so you get a good coverage of people with the same tool, which is a tremendous benefit in team cooperation.
If it's the latter... I can't personally recommend any beginner's textbooks written for Go, but it might be worth looking at Head First Go; the rest of the books in that series are quite approachable.
What would you say is the best starter language these days? Java? Python? Something else like C++ or Rust? I am not a coder at all. I mangle configurations and implement functionality upon request, but I always run up against scripts and CLI and I am no expert in bash or powershell and I don’t have to quit vim. But I get by okay. Not a dig, I just admit I only know as much as I do primarily from self-learning and doing. That’s why I care to learn what I’m not seeing because I’m not able to do it yet: the code.
There’s just so much to not know, especially about setups and small things that impart outsize benefits or functionality, like dotfiles for instance. All these things are so hard to learn in isolation. It’s hard to have scope and find the rails under you to know how to turn and move yourself around in the dev space, in a basic computing, nuts and bolts sense, if that makes any sense. So if you have any tips about things like that, let me know. For example:
Python is very commonly recommended as the "gateway drug" to programming, and for good reason. It's extremely well used, sought after, and easy to get into.
When you feel comfortable with programming fundamentals in Python (everything from variables, functions, loops to classes and modules) I'd recommend looking into Go or Rust -- the latter is probably more of a challenge but it makes sense as a step up from Python.
My extremely personal opinions about the rest of the languages you mention, and these are my personal opinions so there's no need to tell me how wrong I am because personal opinions almost always are wrong (to someone else):
- Java: Highly sought after in the market but mostly found in legacy code bases, bloated (JVM) and not very well liked by developers apart from some scenarios where it's already used, or those that have been working with it for a decade.
- C++: Hated by everyone except those that are already very proficient in it and is quickly being surpassed by Rust, by no means a useless language but not one that makes sense to pick up from scratch in 2020 unless you have a specific reason to.
C++ is still used extensively, even for completely new projects, in computer graphics, databases, etc. Relatively new projects I've seen it used in: TileDB, tensorstore, core Apache Arrow, etc. CUDA programming is C++ at the bleeding edge. I'm in the process of updating my C++ to C++17 and there has definitely been a lot of change in the language over time. I would agree that many would go with Python and let others accelerate that Python with underlying C/C++ they wouldn't have to see.
The problems with Python are performance and tooling, especially dependency management and deployment. Figuring out the happy path is highly dependent on your particular circumstances. Someone might tell you that Pipenv worked well for them, but when you try it, every single pipenv operation takes 30 minutes. Others tell you that vanilla pip works for them, but you care about reproducibility and they usually respond with some variation of “reproducibility is unnecessary”. Similarly, if you want to deploy statically linked artifacts, you pretty much have to use Docker or one of the zip file solutions (the latter are nice, but they don’t bundle the runtime or .so files—at least in some cases) and these always end up being 10-100 times bigger than an equivalent Go binary (for non-toy programs, anyway) which is a big problem for things like AWS Lambda which has a 250MB limit on binaries. Python the language is easy enough, but all of the things surrounding the language are brutal and for whatever reason we handwave away those concerns as though they aren’t important.
I’m more into applications of technology, and security is a focus. For instance, doesn’t meterpreter use Ruby? That didn’t make the list, but I couldn’t say why. Maybe it’s not as easy to learn? I’m just grasping at this point.
Would you say it’s worth looking into C++ due to legacy codebases?
Sure, if you want to understand stuff written in C/C++ then that's an obvious use case for learning it. The same goes for any language. Ruby isn't worse than Python by any stretch but my personal view is that it got really popular due to "Ruby on Rails", on its own it's very similar to Python which seems more widely used.
If security is what you mainly care about though, maybe you'd just enjoy diving straight into rust rather than C++.
I'll +1 on Python, a lot of languages are a bit of a pain to set up as well (I'm thinking back on setting up a LAMP stack for the first time, shudder ). It's a great general purpose language and often used as an alternative to churning out Bash scripts, and Bash is IMO too difficult for most practical applications.
Isn’t Docker supposed to help with this? Is there a LAMP reference implementation or VM, or a script to init a LAMP environment, if I’m even saying this right. I feel like such a noob sometimes and my first PC ran DOS a long time ago. That’s probably connected, although I also ran Linux, just not for daily driving. Computers are endless amusement to me in that way. There’s always more to learn or do, or others to talk to about what they are learning and doing. It’s great.
Ah, thanks. I was actually worried that it was a different, similarly named language in the same space, and the existential dread of trying to untie that Gordian knot via Google itself left me content to just make a mental note, and ask those who do know when it came up next.
That depends on how you define "productivity" - is it how quickly an individual can solve a defined use-case on a timescale of a few days or how well a team, division or company can solve the overall problem.
As a former Perl wrangler, I was super-productive in the short term with terse, elegant, and often clever Perl code: I could really express myself. Maintaining the codebase on the long-term was a nightmare though, so team productivity wasn't great. Go is the opposite, "harder" to write, but much easier to read and understand - which pushes up team productivity significantly.
This is strange to me. I’ve always found Go to just click, and I can write code very quickly. I used to work at a Python shop, and I would prototype in Go to make sure things were reasonably correct before porting to Python (and then to watch the performance, type safety, and a good chunk of the maintainability evaporate, to my great dismay). I wonder if this comes down to differences in how programmers think about programming?
I've coded in C, C++, Ruby, JavaScript, Java, C#, PHP, and messed with Rust and Haskell a little. Go is by far the most productive compiled language I've used, though I've certainly had moments with JS and Ruby that felt more productive.
For me the reason is that the language imposes little cognitive load, leaving my brain free to spend almost 100% of its energy thinking about the problem I am attempting to solve.
With complex languages like C++ I find that I'm spending too much time thinking about language internals and syntax, and with C I am spending too much time doing super low level things. Both these languages also require constant attention to make sure their many built-in footguns do not go off.
Scripting languages like JS and Ruby can be more productive up front, but as a code base matures it becomes harder and harder to feel confident about the code not having hidden runtime bugs. Dynamic runtime typing is technical debt. These languages also run slower, though I have to say JS VMs are impressive.
I have not used Rust enough to comment. Feels like a better C++. Haskell feels like it would be great for certain areas where the provability it offers shines but isn't quite general purpose.
But YMMV. Ultimately I care more about what is written than the language. 'Tis a poor craftsman who blames his tools.
Depends on the situation. Python 3 for anything simple, but the way I go about coding is, I first define the data structures and types for a given module - its inputs and outputs, first of all - and only fill in the code afterwards.
Not unexpectedly, this means Rust is one of my favorite languages. Anything with a poor type system makes this approach unbearable, and even Python has better static typing than Go.
In Python you don't know what you receive and someone could have modified something in an object somewhere in the code that will break down the line in unexpected ways.
Python does not have any type checking; those are just annotations, it does not enforce anything.
"Note The Python runtime does not enforce function and variable type annotations. They can be used by third party tools such as type checkers, IDEs, linters, etc."
You're looking at it from your personal, one-person point of view, but it was made by and for Google, who struggle with tens, hundreds of millions of LOC and thousands of developers whose time is often spent on waiting. There was a post yesterday (I believe) about just that, how a developer in a FAANG managed to do a few tasks at best per week.
How fast you can churn out code in a language becomes completely irrelevant at those scales. How fast you can get to grips with a codebase, make the change, compile it, test it, and have it reviewed is much more important.
And that's where Go comes in; it doesn't have much cleverness so most code is instantly readable. Take a random file from https://github.com/golang/go/tree/master/src and any developer will understand it and be able to make changes. They optimized the language for compile speed (https://stackoverflow.com/a/8673468/204840); the joke goes that they came up with Go while waiting for a compile. And because the code is so simple and standardized, reviewing also takes less time.
I'm mostly thinking of Scala as the direct opposite of Go: it has as many different coding styles as it has developers. And Go's language design process reads as rebelling against most other languages, which seem to want to add features for the sake of it (I'm still bitter about classes in JavaScript and streams in Java).
TL;DR: If you're a solo developer or work in a small team, then Go may not be for you if you're measuring personal productivity. Its value starts to show if you scale up, and you go from churning out code to maintenance mode.
> Go's language design process as rebelling against most other languages that seem to want to put other language features in them for the sake of it
Rob Pike in 2015, about what Go was trying to avoid:
"Java, JavaScript (ECMAScript), Typescript, C#, C++, Hack (PHP), and more [...] actively borrow features from one another. They are converging into a single huge language." [0]
I love this idea of making executables contain assets, and it fits well with the Go ethos of a single (statically built) executable. This makes distribution simpler for applications that do need to deal with assets of different kinds.
A tangential comment: is anyone else a bit concerned that discussions and debates happen on platforms like GitHub and Reddit? These two platforms are large enough to be around for quite some time, but are we making it easier to lose historical context because project creators/designers are choosing third-party platforms that they don't host or control (or in some instances don't or can't pay for)?
It’s concerning that the debate that formed and forms open source is not itself available on open source platforms and/or archived and licensed and available for access in the same manner and license as the code itself. Good spot, quite apropos!
Yes, but at the same time, it'll reach a much bigger audience than mailing lists (which IIRC are still the official communications platform for Go), which remain painful to work with, even using Google's own tooling. That said, mailing lists are much more open and easily archived for posterity, more so than Reddit.
Seems pretty complicated compared to Rust's `include_bytes!("path/to/file")`. Sounds like it might have a compile-time advantage though, seeing as it produces a separate package.
I know it's been mentioned before, but I really feel there is a need to codify an 'internet lore law' about how many comments can appear on a Go themed HN post before someone brings up how superior Rust is
Something like:
Pike's law of unfavourable comparison
OR
HN Golang oxidation rate
I suspect that programming in Rust is so onerous that Rust programmers must take lots of breaks and troll Go threads so they feel less bad about themselves.
Or maybe they're just waiting for something to compile?
To be fair, I remember thinking the same thing when Go was new. HN was pretty dominated by everyone apparently switching to Go (if you were to believe the general attitude, anyway) and how Go was the clear path to the future.
Not that it is or isn’t and clearly Go does things really well in some problem spaces but yeah, this seems to be a common theme for all young languages
I do think some members of the Rust community take it waaay too personally, which I never saw in general when Go was the hip thing. That, I agree, can be very obnoxious.
Well, when considering a new feature for a language, it's pretty relevant to consider prior art in similar languages (the key similarity in this case being the compilation model), no?
Your comment itself should be codified. Maybe it’s just another growing language with a large community that itself is dedicated to improving the language. Rust is newish, and changing rapidly so it’s a good candidate for comparisons between modern languages. It helps that it’s designed completely in the open so the sauce can be seen.
Not to pick on you or even the other child comments, but the number of people who complain about a language being compared to a modern equivalent is funny. What about every other thread on HN where the same thing happens? It's all turtles, just enjoy the discussion.
Nah, there's something unique about Rust zealots in this respect.
nicoburns' comment is completely off topic, and it's extremely obnoxious to see this kind of thing over and over again from the Rust community. And I even like the language!
The Rust promotion has unfortunately changed lately. I've noticed too that they not only tell everyone how great Rust is (which is fine) but started bashing other languages (which is sad and not doing any good for the public image of Rust).
Rust is from enthusiasts, for enthusiasts. If the Rust community wants it to become more mainstream, they need to look at how the Go team focuses on supporting devs in getting things done correctly, with long-term stability.
If you prefer I can provide examples on how Windows, macOS/iOS, Android resources work for their native language SDKs, or how Java and .NET embedded resources work.
The Rust community does have an uptight attitude, as if Rust's success or lack thereof were a matter of life and death. This can be seen in general with languages that haven't achieved widespread success yet.
Forward slashes work on Windows systems for most [1] paths. You can write `include_bytes!("foo/bar.txt")` in Rust and be very confident it'll compile correctly for most of your users.
[1]: "Most" because they don't work for namespaced paths, by design. But note that the number of times you'll encounter those is limited, the Rust standard library doesn't handle them correctly in general, and a lot of other non-Rust software breaks when given them.
Windows path handling is a horrid mess. It basically takes your input and tries to turn it into a valid NT file path, that also includes handling special device names like COM1. At some point this was limited to 256 chars, so you had to input a valid NT path yourself if you wanted it longer, however I think Microsoft removed that limit a few years ago.
This proposal is more powerful than that one Rust macro, but Rust's abilities around embedding files are much more powerful than Go's approach.
This proposal allows "go build" to embed things in a very specific way, but it's not meant to be extensible.
Rust's 'include_bytes!' macro on the other hand is a macro in the stdlib that can be emulated in an external library. I'm fairly sure every feature of go's embed proposal could be implemented via a rust macro outside the stdlib.
For a specific example, I had a project where I wanted to serve the project's source code as a tarball (to comply with the AGPL license of the project). I was able to write a rust macro that made this as easy as effectively "SOURCE_TARBALL = include_repo!()" [0] to embed the output of 'git archive' in my binary.
Of course, there's a very conscious tradeoff being made here. In rust, "cargo build" allows arbitrary execution of code for any dependency (trivially via build.rs), while in go, "go build" is meant to be a safe operation with minimal extensibility, side effects, or slowdowns.
There is a problem with Rust's "anything goes" approach though - it makes it really difficult to know the inputs to compilation. That makes build systems, IDEs, sccache etc. way less robust.
- Go tooling (IDEs, etc.) has to be taught about the _specific_ embedding, in the same way one could teach Rust tooling about specifically `include_bytes!()` (or any other specific macro, just as one teaches Go tooling to handle specific pragmas)
In the world of rust build scripts, there is tooling that exposes information about which files are used if dependency info is all that is required (I don't know to what extent imperative macros are able to expose similar info).
The core of how I see the comparison here: if we restrict ourselves to the capabilities of go pragmas in rust, the same level of support is possible, but even without that restriction there are ways to obtain (though with more work) the same info.
> Of course, there's a very conscious tradeoff being made here. In rust, "cargo build" allows arbitrary execution of code for any dependency (trivially via build.rs), while in go, "go build" is meant to be a safe operation with minimal extensibility, side effects, or slowdowns.
I've been working off and on on a language that tries to get the best of both worlds to some extent. The whole language is built around making sandboxing code natural and composable. Like Rust, it has a macro system, so lots of compile time logic is possible without adding complexity to the build system, but macros don't have access to anything but the AST you give them, so they are safe to execute. There's a built in embed keyword that works like Rust's include_bytes, which runs before macro expansion, which you can use to feed the contents external files to macros for processing. At some point I'll probably add a variant that lets you pass whole directory trees.
Accessing nil pointers is well thought out. Not enforcing error checking is well thought out. Reflection and code generators instead of generics and traits is well thought out. Lack of conditional compilation is well thought out. Refusing to compile with unused variables instead of warning is well thought out. Doing your own sort, map, and all essential array operations from scratch every time is well thought out. Spamming err != nil instead of some operator like `?` is well thought out. iota instead of type-safe enums is well thought out. No file-scope variables is well thought out. go mod is well thought out compared to npm and cargo.
Go doesn’t have to be bad for all those things you like in Scala/Rust/etc to be true.
It’s a simple language with simple, explicit, verbose, tedious, pedantic patterns. The communication resulting from those patterns is where the value lies for a lot of developers and teams.
Before using it I thought the `if err != nil` would be annoying, but it actually isn't. I think a sum type would be better, but the pattern is common and honestly not very different from a match on a sum type.
In general I agree that it does lead to code that's more familiar even if it was written by someone else. Having only used it a few months now, I can definitely see where the value in go lies.
Disagreeing is a pretty strong word. Nobody can disagree these days. I was flagged and labeled a troll within a couple of minutes once I criticized a programming language. Thirty years from now, people will remember this pathetic mob-tyranny era, when one couldn't even offend a programming language's feelings!
It's a consequence of go's simplicity by design. You can't actually destroy natural complexity; only move it around. And if you're unwilling to design a proper container around that slowly increasing complexity, it just squeezes out between the cracks in odd ways.
In Go's case, it manifests primarily as magic (magic files, directories, comments, names, env vars, switches, etc. that you just have to know how to invoke). Another warning sign is the need for an external build tool like makefiles in order to build in one command, because you need extra steps beyond "go build" (such as "go generate").
I kinda agree; I get that they don't want to change the language definition itself, and by putting things in comments they don't need to do any change in the compiler, but it makes things feel unsafe, or "stringly typed" if you will.
IIRC Java eventually "solved" it by adding annotations as a language feature. I'm not saying Go should add annotations, but just sharing some language history.
And C / C++ have had preprocessor directives (e.g. #define) since forever, although there the # lines are handled by a separate preprocessor pass rather than being treated as comments.
I wrote one of the listed tools (github.com/mjibson/esc) and am thrilled about this proposal. I think it's great and solves all the problems in a great way.
I wrote an unlisted tool too, and I am also a fan of this proposal.
The fact that there are so many of these "embed file in binary" tools suggests that it really is a problem that could be usefully solved once, in a consistent and reliable fashion.
I think it's a well thought out design, particularly the implementation of some common Go interfaces (such as fs.FS) that will allow this to be transparently used with existing Go code.
The only thing I'd want is to allow environment variables in the "go:embed" statements: while my assets might be in the same repo and thus relative, they may also be in a different asset repo (with git I could use a git submodule, but with something else that might not be possible).
An interesting option that I've not seen proposed yet would be to accept go:embed comments before importing a package (obviating the need to write the assets package in my previous example).
So, assuming your assets are in the repository github.com/me/myproject/assets, you could just put a go:embed comment on that import.
And the go tool would take charge of generating an assets package with assets.Files.
Then we don't even need a magic embed package at all. And generating a package would give us more flexibility, for example:
    package foo // import "foo.io/foo"

    //go:embed Name string "name.txt"
    //go:embed Data []byte "bin.dat"
    //go:embed "images" "templates" "html/index.html"

and on the importing side:

    import embedded "foo.io/foo"

    // use embedded.Name, with type string
    // use embedded.Data, with type []byte
    // use embedded.Files, with type fs.FS
My former coworker Miki Tebeka wrote nrcs[0]; it was built specifically for static files - css, JS, png etc - for shipping a self-contained web server binary. I still use it now and then because it is well designed, does one thing, and does that one thing really well.
Store SQL scripts as embedded resources rather than string constants. When SQL scripts are stored as individual ".sql" files, both version control and syntax highlighting are better.
This works with most editors and doesn't require fancy multi-language syntax highlighting support. When debugging complex scripts that involve many joins or CTEs having good highlighting can prevent a lot of headaches.
I’d prefer a library that uses the IANA tzdb directly - so I can download the latest tzdb files for my application and OS without needing to wait for the maintainer of that library to ship an update.
They used time zones to illustrate a point. Of course it’s generally best to use that file instead for that specific use case. Now imagine some data that isn’t shipped with Linux by default. That’s the point they were trying to make.
Yes! I can't count how many people have tried to use my software (https://github.com/mickael-kerjean/filestash), wiped entire directories with a Docker volume, and created support tickets and emails as things broke.
The landscape of tools supposed to solve that problem isn't great. Things are changing way too fast, with projects being archived or deprecated with a note to migrate to another solution which, after another year, is deprecated again. Because of those issues, I went with go generate, which isn't ideal but is at least stable. This proposal would be a game changer, and hopefully it will become a reality as Brad is quite a prominent figure in the Golang world.
Thanks! At first I thought it couldn't be, because when you delete the container, the named volumes are still there; but I guess they might have deleted contents using the app itself, without realizing they were sharing the directory...
Sometimes I write tiny services with a web interface that I can just distribute through scp'ing the binary. In this case I can embed the favicon and other tiny files directly into the binary without making the deployment more complicated.
Consider the reverse question: it's simpler to just ship files in the binary unless you need one of the specific advantages of loose-leaf files (like the user being able to modify them, other processes like Nginx reading them).
If you don't need one of the features of having "moving parts" on the filesystem, then why would you want a dependency on the local filesystem?
I maintain an open source project written in Go that's a test runner with an HTML reporting plugin.
I can use this feature to bundle the report template, CSS, JS, and image files with the plugin to make downloading, installing, and using the plugin much easier.
I don't know what platform you had in mind, but this isn't generally true for Linux. No part of a Linux executable is in memory except the page containing the main entry point, initially. You'd have to pre-fault the sections of interest and protect them with mlock to make sure they stay in memory. If you don't their cached pages may be dropped, which is essentially the same thing as swapping.
In short, these embedded assets are no more likely to be in memory than any other file-backed data. If you want to guarantee they are in memory, it is up to you to make that happen.
Regardless of the bike-shedding over the exact syntax, this would be really cool to see. I have no clue how C/C++ made it into 2020 without this language feature (yes, I know about incbin and xxd), and it would be ironic for Go to get it first given how new and conservative a language it is.
While I agree that "not everything needs to be in std lib", having worked with other languages that support embedding out of the box, let me tell you it's handy.
It also seems like an area where it'd be nice for there to be a single uniform way to do it.
For runtime, you can certainly read any arbitrary binary file in C without problems.
For compile time, I make heavy use of xxd -i to generate header files in a range of projects. It is no longer installed by default on most *nix systems, but it is generally still in your package manager (often installed alongside vim), and it has been around since 1990.
This is funny to me to see on HN tonight because I spent a bunch of time packaging a Go app on Sandstorm this evening, and one of the listed packages apparently was pitching a fit about some static assets not being pulled into the Sandstorm package correctly...
I resolved it, checked HN, and saw this was a peeve they wanted to solve. Hah.
I would love if gob encoded files were supported natively with some special sugar syntax.
I have a project I work on where I need to package a bunch of data that won't be available locally. Given that I'm loading data, it seems simplest to package it as encoded Go data from the get-go.
Right now I’m using go-bindata to embed gob files.
Go started out as a Java 1.0, instead of learning from what a modern language should look like, so every missing feature eventually gets added with a couple of hacks.
Hopefully by then they will be available; we will be discussing how they are unsound and how it was a mistake to add them like that, while everyone discusses the new language that is going to be so much better and fix all the problems.
It looks like this is just for static data files. To run a piece of memory as executable you'd have to explicitly mark the page as executable with e.g. mprotect(2) on Linux. I guess there's nothing stopping you from doing that from Go also.
You still need an ELF loader, unless you're shipping shellcode. As far as I know, there's currently no great way to load an arbitrary segment of memory as an ELF without doing some sort of copy (e.g. memfd_create, write, exec).
> No mention of go-bindata which was one of the originals :(
There’s literally an entire section in the appendix dedicated to `go-bindata` and explaining what it generates. It was also first in the list of libraries mentioned.
I also disagree with you entirely. The document does a great job explaining the problems with having this “solved by a library”. I’m excited about this draft design.
I’m aware there is demand for this sort of thing but I’d suggest that almost nobody actually needs it. This strikes me as a feature in search of a problem – especially considering how many of the examples in this document relate to static web assets.
If you’re considering bundling static assets into your binary for a service, you would almost certainly be better served by containerizing your service and copying those assets into the image.
I disagree: one amazing aspect of Go is the static binary that can easily be distributed. I embed templates and other text files in my binaries, and it makes installation simple, without extra dependencies or steps. Even though I'm a huge proponent of containers, I don't think they're needed for things like CLI tools.
When I see all the CLI tools that require NPM / PIP to install hundreds of dependencies I am quite amazed when one requires a single binary. I'm not saying it's a unique feature with Go but that it is nonetheless a feature.
What amazes me is that installing hundreds of dependencies is considered acceptable, and that modern generations don't get that static linking goes back to the first compilers, while dynamic linking only went mainstream around the mid-'90s.