"Names imported from a package are accessed by using the last component of the package name as a prefix (rather than the horrendous full package prefix that you often see in Java and similar languages)"
import "image/color"
x := color.RGBA{255, 255, 255, 0}
Perhaps I'm naive, but I thought you could do this in basically any language, including Java (it's been a while).
EDIT: Not sure why this was downvoted, but it's a trivial Google search[1] to show that this is indeed something that you can do in Java:
    import javax.swing.JOptionPane;

    class ImportTest {
        public static void main(String[] args) {
            JOptionPane.showMessageDialog(null, "Hi");
            System.exit(0);
        }
    }
Which, without the import, would force you to use the full name as the author alludes:
    class ImportTest {
        public static void main(String[] args) {
            javax.swing.JOptionPane.showMessageDialog(null, "Hi");
            System.exit(0);
        }
    }
I'm not intending to bash Go (it has the largest quantity of awesome features of any language I've seen in a long time), nor necessarily to defend Java. It just seems a bit disingenuous to claim that this is exceptional behavior.
[1] http://www.leepoint.net/notes-java/language/10basics/import....
    import org.somelib.*;

    x = new colors.RGBA(255, 255, 255, 0);
So you don't have to bring all the classes into scope, but you don't have to write out the full path to use them either. Just the last bit of the package-name as a qualifier.
I've been using Go for a performance-critical part of a production system and I've been thoroughly enjoying it. But certain things have been odd and frustrating.
One trait that I find frustrating is how pedantic Go is about types.
For example, if I have a float x and an int y, I can't write x / y, I have to write x / float64(y)†. The intent is to force awareness of type conversions that introduce subtle gotchas, but I don't see how it applies to the case float / int.
A better example of the same phenomenon is that alias-style types need to be explicitly cast to their aliased types, which introduces pointless noise into what would otherwise be a nice way of 'marking up' the semantic role of variables and arguments (think type index uint).
†Notice there is a common-or-garden 'int' but no common-or-garden 'float', apparently because one should always be aware of precision. Unfortunately, because of the alias type issue I mention above, 'type float float64' doesn't help.
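To make both gripes concrete, here's a minimal sketch (names invented) of what the compiler rejects and what it demands instead:

    package main

    type index uint // an alias-style type marking up a semantic role

    func main() {
        x := 2.5 // float64
        y := 3   // int

        // _ = x / y       // compile error: mismatched types float64 and int
        _ = x / float64(y) // the conversion has to be spelled out

        var i index = 1
        var n uint = 10

        // _ = n + i       // compile error: mismatched types uint and index
        _ = n + uint(i) // even the alias must be converted to its underlying type
    }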
> but I don't see how it applies to the case float / int
Does the int get converted to a float for this calculation? Is sizeof(int) == sizeof(float)? Then you can lose accuracy in the conversion; MAX_INT is larger than the largest integer that a same-sized float can represent without rounding error.
Fair point, although for the garden-variety int (int32) and float64, this isn't the case. In an ideal world, the compiler would be smart enough to know this, and complain appropriately.
So, you're saying that you want a table that looks like this:
    Type 1  | Type 2 | Allowed?
    --------+--------+---------
    float32 | int32  | n
    float32 | int64  | n
    float64 | int32  | y
    float64 | int64  | n
    etc...
Seems a bit odd to have the majority of conversions banned, but allow some random-seeming exceptions that are safe.
Since implicit conversions are almost always either lossy or dangerous (think 'Fahrenheit(32) - Celsius(64)'), I think it makes sense to remove implicit promotions entirely.
It doesn't seem odd to me. In fact I think it might be a Good Thing.
Having the compiler take care of the safe cases, for one, makes one less likely to always blindly cast things and leave oneself open to the unsafe cases. For certain values of 'one', of course :)
As for your point about Fahrenheit, the real solution is
    type DegreesF float64
    type DegreesC float64
so that the units cannot be accidentally mixed.
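A minimal sketch of how that plays out (just a compile-time check, nothing more):

    package main

    type DegreesF float64
    type DegreesC float64

    func main() {
        f := DegreesF(32)
        c := DegreesC(64)

        // _ = f - c        // compile error: mismatched types DegreesF and DegreesC
        _ = f - DegreesF(c) // mixing units now requires a visible, suspicious conversion
    }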
Note to self: It might not be a bad idea to have alias-style types deliberately not participate in automatic numeric conversions, although I think it is a mistake they don't currently always auto-convert to their own aliased type.
Or just write your code so you don't need to convert types. I think that's entirely doable, with the possible exception of API boundaries, and removes the need to have complicated and confusing rules about allowed conversions.
And yes, as far as different unit types, you just pointed out exactly what I was getting at. It's relatively rare that you want implicit type conversions.
Maybe so, maybe so. I do agree that numeric implicit conversion is not as simple as it first seemed to me, and would have knock-on effects elsewhere.
Edit: For the record, here are the rules we've discovered so far:
1. Numeric casts: A -> B happens automatically if every value of A can be represented exactly in B. Both must be base types.
2. Aliased casts: A -> B happens automatically if A is an alias of B.
3. Automatic casts only happen if a single automatic cast is required, not more. In other words, x + y should not cause x and y to both be cast to a common type.
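To illustrate rule 1 against today's Go, where both casts must be written out (the comments give the rule's verdict; this sketches the proposal, not current behavior):

    package main

    func main() {
        var i32 int32 = 1 << 30
        var i64 int64 = 1 << 62

        a := float64(i32) // every int32 is exact in float64, so rule 1 would make this implicit
        b := float64(i64) // some int64 values round, so this would stay an explicit cast
        _, _ = a, b
    }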
Indeed, that's why rune was added to the language a little more than six months ago. Now that rune is used instead of int for representing Unicode code points, int can be extended to 64 bits.
That's true, and it will probably change fairly soon, but there are a whole range of safe integer to float casts that will still exist even if int changes from int32 to int64.
And these safe casts should be transparent so that people who regularly do things with floats don't feel like second class citizens in the language.
The check cannot be made at compile time, only at run time, and even if it were possible to make the check at compile time it would still be a bad idea. At some point in the program's 20-year lifetime, someone will decide that a number N used somewhere in the program has to become N+1, and N+1 doesn't work. It would be unacceptable for such a trivial change to break the program's compilation, and for a number to carry so much hidden state and meaning.
I don't understand why people are so afraid of compile time errors. They're great! They help us write code and reason about it.
If someone changes some ossified piece of code, and that has the effect that it makes an automatic cast semantically dangerous, I'd be extremely grateful if the compiler scolded me with a helpful error message.
I'm not sure what you mean by "The check cannot be made at compile time, only at run time". Are you talking about a language other than Go? Go is statically typed.
edit: I've been talking about compile time errors that prevent potentially dangerous implicit casts from happening. I think you're talking about something else.
If int64 is the default type for int, then any int will be able to contain values not representable as a float64 (you'd need a float80 or float128 to make it work). To safely convert, a run-time check will be needed.
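A quick demonstration of the loss, runnable today with the explicit conversion:

    package main

    import "fmt"

    func main() {
        var n int64 = 1<<53 + 1    // the first integer float64 cannot represent exactly
        f := float64(n)
        fmt.Println(n == int64(f)) // false: the round trip lands on 1<<53
    }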
My understanding of this is the following: having implicit casts between certain fundamental types has always been a source of error that even led to own classes of security issues. Instead, Go, like with other things, is going the explicit way: instead of documenting a list of implicit conversion rules somewhere, the language forces the programmer to think about the conversion, and to explicitly state the intention in the form of an explicit conversion between two data types.
And to be honest, I like code that is explicit in what it does. That makes things easier to read and comprehend.
As the other comment points out, whether there exists a gotcha depends on the sizes, byte -> float32, int32 -> float64, etc are fine, int64 -> float64 is not. There is no reason for the compiler to be ignorant of this.
Moreover, if float means float64 (as I'd suggest be the case) and int continues to mean int32 (which it won't forever), then a lot of common cases would not require explicit casting and be safe.
As the other comment points out, whether there exists a gotcha depends on the sizes, byte -> float32, int32 -> float64, etc are fine, int64 -> float64 is not. There is no reason for the compiler to be ignorant of this.
Yes, but too much variance in how the compiler reacts depending on local implementation details can be quite messy. The principle of least surprise would recommend leaving the feature out, especially since Go dispenses with compiler warnings. Otherwise, we could have a situation where something compiles just fine on your desktop, then breaks when compiled for a different platform like NaCl. How would the developer know this on the desktop, ahead of time? By leaving the feature out, you get a more uniform and reliable tool.
If int means int64 on one platform and int32 on another platform, one should expect that some things might require care to port correctly.
A compile-time error because an automatic cast can't be performed on the new platform is infinitely better than a forced cast that silently introduces run-time errors.
If int means int64 on one platform and int32 on another platform, one should expect that some things might require care to port correctly.
That's just the way things have been. It's not a particularly good or pleasant situation. If the Go team wants to make this better, all the power to them. (And I know for a fact that it doesn't have to be this way. It's just the expectation we've come to accept as normal. Squeak runs bit-identically on over 50 combinations of OS and processor.)
A compile-time error because an automatic cast can't be performed on the new platform is infinitely better than a forced cast that silently introduces run-time errors
"Silently" here being that the programmer mindlessly puts in the cast because the compiler "forces" her to? I don't think the compiler is at fault here.
The thing is, if you've marked up the semantic use of a type, it is no longer guaranteed that it meaningfully supports all the operations the "base type" did. In your 'type index uint' example, for instance, you would typically not divide an index by an int.
Although your example argues against round-tripping the conversion, rather than against the conversion per se. Which I'm not proposing.
In your example, my proposal would have an index get automatically cast to an int, then undergo division, remaining an int. Who knows why you divided it? But you wouldn't get an index back.
The amazing speed of the compiler means the development cycle is as fast as a scripting language even though full optimizations are always switched on.
This is a common misconception of those who work mainly in compiled languages. In some dynamic languages, you don't just have the same edit-test-debug cycle. Advanced users of dynamic languages can conduct mini edit-test-debug within all the parts of the larger scale edit-test-debug cycle.
Go is not object-oriented. This, in my opinion, is a massive plus for the language....In the real world, things are fuzzy and they spread across multiple conceptual boxes all at once. Is it a circle or an ellipse? Is she an employee or a homeworker? Classifying things into strict type hierarchies, although seductive, is never going to work.
Here, the author is simply wrong. This is another common misconception about OO. You don't need strict type hierarchies. You don't even need classes at all for OO.
> Here, the author is simply wrong. This is another common misconception about OO. You don't need strict type hierarchies. You don't even need classes at all for OO.
There are almost an infinite number of definitions of OO, I suspect the author was referring to the generally accepted and most widespread ones, which include classes and inheritance as some of their most fundamental characteristics.
"The phrase "object-oriented" means a lot of things. Half are obvious, and the other half are mistakes."
Go implements (and improves on) the obvious ones (which by the way mostly have been around for longer than the term OO itself), while avoiding the rest.
There are almost an infinite number of definitions of OO, I suspect the author was referring to the generally accepted and most widespread ones, which include classes and inheritance as some of their most fundamental characteristics.
Yes, but everyone I know who does real work with Objects knows that the textbook definitions aren't what people do with real systems. That makes it sound like the author only knows the textbook definitions.
> Yes, but everyone I know who does real work with Objects knows that the
> textbook definitions aren't what people do with real systems. That
> makes it sound like the author only knows the textbook definitions.
Phrases like 'who does real work' and 'people do with real systems' make it sound a bit like a 'no-true-scotsman'. I imagine that was not your intent.
Anyway, I think the point is that the common perception of object systems is likely based on those commonly used in Java, C++, and C#. I say 'likely' simply because I am guessing that those are the most popular by volume of code written. Go has a different approach than those.
...real systems' makes it sound a bit like a 'no-true-scotsman'
Isn't there a meta-fallacy here? Sometimes there are things that experienced people don't say, but some, like inexperienced undergraduates, do.
Anyway, I think the point is that the common perception of object systems is likely based on those commonly used in Java, C++, and C#. I say 'likely' simply because I am guessing that those are the most popular by volume of code written. Go has a different approach than those.
There may be some selection bias here. "Everyone I know who does real work with Objects" is by happenstance mostly working in Smalltalk. The pragmatic approach most people take there involves quite a bit of duck typing. (And lots of grousing about how those interfaces aren't documented.) There's also a lot of bad legacy code that needs a refactoring cluestick there too, tbh.
Thanks for the comments. I take your point about OO, but I was aiming at producing a broad overview of Go which necessarily means simplifying in some parts. To 90% of developers today brought up on strongly typed languages such as Java and C#, OO means single inheritance type hierarchies.
I agree with your decision to not label Go as object-oriented; to too many people that term is simply synonymous with inheritance. But rejection of the term itself doesn't mean we should automatically reject the often-related concepts of encapsulation and polymorphism, both of which we can have without inheritance.
Specifically, you say this:
I can hear the die-hards screaming already about encapsulation, inheritance and polymorphism. It turns out that these things are not as important as we once thought.
And then you proceed to give examples that only attempt to refute the usefulness of type hierarchies, without addressing encapsulation and polymorphism. I'd be curious to know what facilities, if any, Go provides for these concepts.
Go doesn't support polymorphic methods but supports polymorphic types through interfaces. Encapsulation is through the private/public naming convention I refer to in the blog post.
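For anyone curious, a minimal sketch of both (the type names here are invented):

    package main

    import "fmt"

    // Shape is satisfied implicitly by any type with an Area method -- no
    // inheritance and no "implements" declaration.
    type Shape interface {
        Area() float64
    }

    type circle struct{ radius float64 } // lowercase name: unexported outside the package

    func (c circle) Area() float64 { return 3.14159 * c.radius * c.radius }

    type square struct{ side float64 }

    func (s square) Area() float64 { return s.side * s.side }

    func main() {
        shapes := []Shape{circle{2}, square{3}}
        for _, s := range shapes {
            fmt.Println(s.Area()) // polymorphic dispatch through the interface
        }
    }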
I can hear the die-hards screaming already about encapsulation, inheritance and polymorphism.
Any die-hards screaming about all 3 of those should be viewed with a bit of skepticism. When I talked about this a lot, folks were "meh" about inheritance, especially class-based inheritance. The only thing I see as being worthy of die-hard adherence is polymorphism. Encapsulation is there to help enable polymorphism. I'll note that the degree of encapsulation varies amongst systems that call themselves OO.
Yes, but are people actually treating single inheritance as the predominant means of polymorphism? What happened to interfaces? (Plus things like Strategies.)
To me your "simplification" is a bit of a straw man.
The word Go is far too overloaded in the English language, and the intended meaning is not always clear from context.
This title could easily have referred to 5 weeks with the programming language Go (the correct reading), 5 weeks playing and studying the game Go, or, in a slightly colloquial usage, 5 weeks of constantly doing things and going.
And those are just the meanings that make sense in this context. It is also a verb with a wide variety of (related) meanings, and it forms part of a command in multiple computer languages (T-SQL uses GO, goto is infamous in Basic, etc.).
Worse than the ambiguity of reading, the language is nigh ungooglable. Some queries work, but less clear-cut ones get swamped by pages that just happen to include the word "go." You can substitute "golang," but then you miss all the pages that only refer to it as "Go."
It would be helpful to everyone if we made it a habit to always refer to the Go language as GoLang (instead of just sometimes). It's still quick and easy to say while also being much more specific.
I'd be interested in hearing more about the exceptions side. The usual derision is that some top level unrelated code ends up handling them, but that seems silly. A far more normal example would be a routine that has to get some resources and calls various retrieval routines which may end up accessing the filesystem, databases, or the network. Those routines could go several calls deep before throwing an exception. The top routine can then find alternatives, use cached versions, return an error etc.
I do like exceptions in Python where their use is ubiquitous and there is garbage collection. I detest checked exceptions in Java because you are forced to handle things at a level you often don't want to.
I'm 65% certain Go made a mistake not using exceptions, but would love to hear from others.
There is an exception-like mechanism called 'panic', but it shouldn't be used as a control flow tool. Panic is reserved for unrecoverable errors, not mundane situations like a failed query.
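In other words, the mundane case reads like ordinary control flow (a sketch; findUser is made up):

    package main

    import (
        "errors"
        "fmt"
    )

    // findUser returns an error for the mundane failure -- the caller decides what to do.
    func findUser(id int) (string, error) {
        if id <= 0 {
            return "", errors.New("no such user")
        }
        return "someuser", nil
    }

    func main() {
        name, err := findUser(-1)
        if err != nil {
            fmt.Println("falling back:", err) // handled in place, no panic
            return
        }
        fmt.Println(name)
    }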
This seems extraordinarily painful. I usually have five or six stack frames between GetDataFromDatabase() and code that can raise an error dialog to the user or return an HTTP error code. This means every single one of those stack frames is going to duplicate this annoying if err != nil sequence.
The article author is quite wrong when he says "exceptions are broken by design because they force errors to be handled at points far away from the cause". Exceptions don't force you to handle errors at any particular location; they allow you to handle errors anywhere in the stack. On the other hand, return values do force you to handle errors then-and-there, which 90% of the time is a half-dozen stack frames away from your handler.
We have two orthogonal tasks which traditional exception mechanisms complect together:
1. Handling a failed assumption, which will cause the code that follows to be incorrect (I need the contents of this file to do my task, but it doesn't exist, so I can't do my task)
2. Handling a failed operation, which there is a straightforward way of working around (if I can't write to my log file, maybe write to stderr instead and give up)
The trouble is that the 'inner' code doesn't know which of these a given failure is, because it is context specific. Seems like with languages like Java, the default is 1, whereas with Go it is 2. They're kind of equivalent, though, because you can always convert a !ok into a panic(...).
But certainly exceptions are the nuclear option, so it seems reasonable to me that they shouldn't be the default for common operations that can fail.
Why wouldn't you want to use exceptions for #2? I posit that it is very rare for errors like RPC failures and filesystem errors to be handled at the level of the call. 90% of the time the natural handler is several stack frames up where you can raise an error dialog or write an http error response.
Return values create endless repetitive "if error return error" code, or worse - programmers get lazy and ignore return values, producing bugs that show up only in hard-to-test-for situations like transient network failures.
Why is it rare for RPC/filesystem failures to be handled at the level of the call?
I think it's much more natural for a memcache API to return an error if the server is not reachable, and I can continue to execute the current function. Similarly, I think it's more natural for a "users" API to return an error if a particular user doesn't exist so I can redirect to a signup page or something, rather than throw a UserNotExistException.
And yes, errors as return values may seem to add more code to simple examples. But I find it does wonders for clarity/readability. Using "regular" control-flow for error conditions and the "happy path" makes code much easier to follow; this is as opposed to trying to intuit the different ways control can jump from the happy path into the error handling.
Also, I find that having to write the "if error return error" makes me pause to think about how to handle errors properly. For example, if the function I'm writing literally cannot proceed I will return the error. If it's a really weird place to be getting an error, write it to a log and return the error. If I can ignore errors (like the memcache example above) then I keep going.
It's rare because RPC & filesystem access is usually wrapped in a library or module which does not have any business knowledge of the task at hand. A more realistic example is your getUser() call which calls 10 stack frames down to a filesystem access, and let's say you get a filesystem error. The natural thing to do is throw an exception which goes up all 10 stack frames and gets caught by getUser(), which provides some sort of "sorry, internal error" message to the client.
The notion that typing "if error return error" through 10 stack frames makes you think more clearly is absurd. Decades of C experience has shown that lazy programmers will ignore critical error conditions and introduce hard-to-find bugs because execution plows ahead past the original error.
Whether getUser() returns an error object or throws an exception is a question of high-level API design. Sometimes UserNotExistsException makes sense, sometimes a null (or error) result makes sense. That is an entirely separate issue. Any designer with significant experience will use both approaches as appropriate.
Again, I'm not disagreeing that exceptions tend to be more terse. Exceptions optimize for writability at the expense of readability. Reading linear code that uses if statements and loops is easier than code that uses try-catches. Especially trying to come up with all the ways control could jump from happy path code to error-handling code.
You tend not to be writing all 10 methods in a particular call chain at the same time. You will be writing a few methods that call each other inside a module. You paint this as a massive timesink, and I can assure you, it definitely is not.
Lazy programmers can also have catch-all exception handlers. I don't see how exceptions help make lazy programmers perform due diligence.
That being said, there is a place for exceptions. Truly exceptional conditions such as index out of bounds, or nil pointer dereference, or some internal precondition violated, should be treated in an exceptional manner. Go does this with panics, and panics are almost never caught as part of control-flow. They tend to be caught at the root of goroutines, logged, and the goroutine killed. The HTTP library, for instance, will catch panics in any goroutine it spawns, and write a 503.
I just find it odd that people treat commonplace things as exceptional. File open failed? Could not resolve hostname? Broken TCP connection? These aren't particularly exceptional things. They are probably not a result of a bug, and so should be handled by the programmer.
We seem to be going around in circles here... You say 10 layers of if error return error is not a time sink, and I say it is. I spent about a decade doing C and C++ programming before Java, and IMHO exception handling is second only to garbage collection as a life-changing language improvement.
There is a key difference here: When a lazy C or Go programmer fails to check an error value, execution continues - possibly many lines or stack frames ahead before some sort of failure symptom is observable. In perverse cases this can produce silent data corruption. I spent far too much of the 90s chasing down these kinds of problems.
When a lazy programmer uses a catch-all exception handler, the error is still caught - immediately - with the full stack trace of the original problem. This is golden. Furthermore, a catch-all exception handler that prints an error message to the user/http request/whatever is often exactly the right approach.
There's a lot of stupidity in the Java standard libraries, but your examples (file failure, bad hostname, broken connection) are exactly the kinds of things that should be exceptions, and are usually best caught at a high level where meaningful errors can be reported to the user.
Right, by typing a zillion if err != nil return err lines of code. In any sophisticated app, almost every single function call is going to require this boilerplate. Annoying.
Every Go article explains this, but it doesn't answer the higher level question of why not use exceptions. Even worse you are forced to return data values along with the error code/flag/object.
I've not seen a standard error object either, which means every place that looks at an error has to be intimately aware of what it is looking at. There is an os.Error but that appears to be string only, so there is no errno equivalent value inside.
In languages with exceptions there is a base error/exception/throwable that has reason fields as well as methods to get tracebacks which are extremely useful.
I'm now 85% convinced Go made a mistake not using exceptions.
If you're going to do error handling right in an exception-using language, you really need to catch every exception close to its source, otherwise you won't know how to handle it properly.
I think Go's main approach is this: errors are not exceptional. Errors happen all the time, and should be considered part of the normal control flow of the program, not something to be shoved into a hidden compartment and dealt with later.
Thus using the usual control flow statements to check for errors is entirely appropriate, and it's immediately obvious when reviewing the code if someone has failed to do the right error checking.
BTW there is no os.Error type any more. There's a language-defined "error" type, which is an interface containing a single method:
    type error interface {
        Error() string
    }
You only need to be intimately aware of what you're looking at if you want to take some different action based on the kind of error. This is actually quite rare (usually you care only that something failed, not how it failed), but when you need to, generally the calling package defines an error type that represents a particular kind or class of errors (for example http://golang.org/pkg/os/#LinkError)
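A sketch of that pattern (QueryError is invented here, in the spirit of os.LinkError):

    package main

    import "fmt"

    // QueryError carries details for the rare caller that wants them;
    // everyone else just treats it as an error.
    type QueryError struct {
        Query string
        Err   error
    }

    func (e *QueryError) Error() string {
        return "query " + e.Query + ": " + e.Err.Error()
    }

    func lookup() error {
        return &QueryError{Query: "SELECT 1", Err: fmt.Errorf("connection refused")}
    }

    func main() {
        err := lookup()
        fmt.Println(err) // usually all you care about: something failed

        // Only code that needs the details does a type assertion:
        if qe, ok := err.(*QueryError); ok {
            fmt.Println("failed query:", qe.Query)
        }
    }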
When you have exceptions you can decide how many levels of stack frames higher to handle it, and none of the intermediary functions have to be modified. With this mechanism, every intermediary must handle the error and must agree on the error type (to some degree).
The argument that keeps being trotted out for Go's approach is consistency of handling, which is a good thing. But it is very manual, especially where there is distance between the code that finds an error and the code that decides what to do about it. Doing all that manual work doesn't seem to have any benefits to me.
An error isn't just about what went wrong - it's also about what you were trying to do when it went wrong, so you can do something appropriate. If you handle exceptions several stack frames up, then you lose that information. Doing things the Go way also means you can make nice error messages that reflect the task, rather than a stack trace that only makes sense if you know the source code.
Yes, every intermediary must agree on the error type - it's the language-defined error type, which defines a single method, Error, which returns the error as a string. There's no need for any further agreement.
Functions encapsulate errors. If I call a function, it is entirely up to me how I wish to handle that error. "The code that finds an error" is the code that calls the function. All the context is local. There's no need to know how that function first encountered that error - that's part of the implementation detail of that function. At every stage, we make error handling decisions based on local context. This makes the code more maintainable, because there is genuine separation of concerns.
If you really want exceptions (a classic example is a recursive descent parser where you don't want to check every call), you can use panic and recover, making sure that callers will never see it - it's a local contract only.
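A minimal sketch of that local contract:

    package main

    import "fmt"

    // parse uses panic internally to bail out of deep recursion, but recovers
    // before returning, so callers only ever see an ordinary error value.
    func parse(input string) (err error) {
        defer func() {
            if r := recover(); r != nil {
                err = fmt.Errorf("parse error: %v", r)
            }
        }()
        mustNotBeEmpty(input)
        return nil
    }

    func mustNotBeEmpty(s string) {
        if s == "" {
            panic("empty input")
        }
    }

    func main() {
        fmt.Println(parse("")) // parse error: empty input
    }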
How do you lose information with exceptions? You can handle an exception in the code immediately surrounding whatever detects the problem and your code semantics are no different than Go. The Go mechanism doesn't give the option of handling it several stack frames higher without having to implement handling in every single intermediary function.
The arguments I keep seeing for Go's semantics seem to the same as the ones about manual memory allocation - you must be in control every step of the way.
It looks like panic only takes a string so it isn't a good equivalent to exceptions. Whatever gets flung around should generally have enough information to make decisions and to generate meaningful error messages.
Is there a particular reason why? I mean, if the failed query requires that a potentially quite involved operation needs to be aborted, an operation that is several functions deep, isn't panic exactly appropriate?
It is quite neat and mighty convenient, but I sorta live in fear that at some arbitrary point the library will change and my code will just stop working. I know this isn't a problem unique to Go and there are solutions, but the lack of explicit versioning makes me nervous. Then again, it hasn't bitten me yet, and I certainly have benefitted from the ease.
it doesn't "compile from a url", the package name is just the url address. You still have a directory of downloaded packages analogous to Python's site-packages, and they don't update unless you explicitly update them. The difference is that if you say `go get github.com/whatever/foo` the package source will be downloaded to a location such as `/usr/local/go/src/pkg/github.com/whatever/foo`.
One solution is to add version to url. Some examples from Google API for Golang (http://code.google.com/p/google-api-go-client/):
    code.google.com/p/google-api-go-client/tasks/v1
    code.google.com/p/google-api-go-client/books/v1
If you're really worried about code changes that might break your project, you're free to fork the library, e.g. if it's on GitHub. Even though that's not really a sophisticated way of versioning, it's quite convenient.
1. Hope that the projects you depend on have a tag that you can reference as your import or that you can acquire specifically with `go get`
2. Fork them and keep track of them yourself.
3. Use the tool (I can't find the link) that manages version dependencies per-project so that you can have different versions for different projects based on your needs (think virtualenv).
The tools are also really nice. "go build", "go run", "go get", etc. Having dependency management and build management built into the language - even if it is not super sophisticated yet - is a great idea.
> Enforcing a brace placement style and other conventions means the compiler can be super fast.
Surely the lexical-analysis phase is not the bottleneck in compilation? (The context strongly suggests that 'other conventions' means 'other lexical conventions'.)
This is a stupid question, but it's beyond difficult to search for an answer (I've really tried!):
Can I import Java libraries into Go? If I can't find a Go library to do what I need, what are my options besides messaging to another process, or (presumably) finding an equivalent C library (I don't trust my C skills enough to make wrapping something a confidence-inspiring no-brainer)?
Are there general language/library interop details out there that any golangers could point me towards?
Go's built in cgo tool makes it very easy to wrap C code. See http://golang.org/doc/articles/c_go_cgo.html. Crucially, you can write most of the wrapper in pure Go and the C boilerplate code is generated automatically.
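For a flavor of it, here's a minimal cgo sketch along the lines of that article, calling libc's sqrt (note the directives live in the comment right above import "C"):

    package main

    /*
    #cgo LDFLAGS: -lm
    #include <math.h>
    */
    import "C"

    import "fmt"

    func main() {
        // C.double converts the Go value; the call goes straight into libm.
        fmt.Println(float64(C.sqrt(C.double(2))))
    }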
You can run shared libraries on Android very easily. The shared libraries can do OpenGL (video) and OpenSL (sound), plus most regular libc stuff. Most of the Android APIs are exposed as Java and you can use JNI to call into the shared library (ultimately it is C). As far as I can tell you can't generate a shared library from Go so this "normal" approach is off the cards. (There are also issues like how the Dalvik garbage collector and Go GC would interact.)
To have a pure binary would effectively require a reimplementation of the framework of Dalvik and all the various classes/methods and would be a huge undertaking. It would be extremely unlikely to install on existing Android versions and would only be in a future version. Or in other words it would be many many years before you could depend on it being on Android devices even if this was done for the next Android version.
The shared library thing is the biggest problem though. Android applications are really mashups of components from the same app or others (see Activities, Services, Content Providers, Receivers). There isn't actually a main() method or equivalent. Instead the components are loaded and called as needed.
Well, there's no ambiguity in this case, because "int" is a keyword, known to the compiler, and will always be interpreted as a type. But if you had something like "x*y,z", then the meaning of this would depend on whether x is a type or not.
I think this is more in reference to the common C error of:

    int* x, y;
when you meant to declare both x and y as int pointers, here only x will be a pointer and y will be a straight int.
Obviously the Go designers could have chosen to simply make this mean that both x and y are pointers; however, this would be somewhat confusing for those familiar with C.
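For contrast, a minimal sketch of the Go spelling, where the pointer is part of the type:

    package main

    import "fmt"

    func main() {
        var x, y *int     // both x and y are *int; no C-style surprise
        fmt.Println(x, y) // <nil> <nil>
    }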
Yes, Go is different. Yes, the language designers made a lot of decisions that people will complain about [at first]. Yes, I'd [really] love to have generics (even just for simple code repeat cases).
But goodness I love writing Go. Sorry, it's hard for me to be terribly specific outside of, for some reason, I'm very productive with it and I love the standard libraries. And where they're lacking Google Code, Github and IRC/play.golang.org make up for it.
I agree that it is a pleasure to use. I also suspect that generics will be forthcoming after everyone understands the 'theory of Go' a little bit better.
Unfortunately, the standard libraries that I've sampled have felt inconsistent. Two examples I happen to remember:
1. The strconv package has an Atoi (a wrapper around ParseInt) but no Atof -- instead you use ParseFloat directly, and you have to give it as a second argument one of the integer values '32' or '64' to set the precision it parses at (why not use an enum or two funcs?) -- see the sketch below.
2. The bytes package has versions of Index that search for single bytes (IndexByte) as well as byte slices (Index), a nice performance-friendly touch. However Split only has the byte slice version. SplitByte would probably be twice the speed.
If you are going to write a package to stand the test of time, be consistent.
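Here's the asymmetry in miniature (a sketch; error handling elided):

    package main

    import (
        "fmt"
        "strconv"
    )

    func main() {
        i, _ := strconv.Atoi("42")            // convenience wrapper around ParseInt
        f, _ := strconv.ParseFloat("2.5", 64) // no Atof; the bare 64 selects float64 precision
        fmt.Println(i, f)
    }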
For what it's worth, I've looked into the 'Split' case, and the performance difference when specialized to the single-byte case is about 2%, which is mostly because Split already has built-in specialization for the single-byte case, which amounts to a couple extra instructions in a function whose running time is dominated by allocation.
I think they made the right choice there; the Go team seems very good about optimizing only where it matters; there's lots of low hanging fruit, but the majority of it isn't very useful fruit.
Just for fun, I just looked into it too. Which factor dominates depends on the kinds of strings; for large strings, extra instructions in the loop matter very much. I'm processing very large strings.
I did 128 runs on a byte array of length 2^24. It has delimiters placed at positions {2^i, i < 24}.
I tested my implementation against both the "bytes" package implementation, and a copy of the relevant portions of the "bytes" package (to account for any odd effects of inlining and separate compilation). I did the set of timings twice in case there was any GC.
Here's the wall time in milliseconds for the three implementations, on a 2010 Macbook Air.
My single-byte implementation is about 40% faster than the local version, and 70% faster than the "bytes" version. Not quite twice, but I wasn't far off.
But aside from performance, there is just consistency of interface. Once you've established a 'Byte' variant of some functions, you should do it for all the common functions.
It doesn't sound like you're using the benchmarking tools that Go provides; I'd recommend using them if you're not.
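For reference, a minimal benchmark sketch (file and input invented; drop it in a _test.go file and run `go test -bench=.`):

    package split_test

    import (
        "bytes"
        "testing"
    )

    var data = bytes.Repeat([]byte("abc,"), 1<<20) // hypothetical input

    func BenchmarkSplit(b *testing.B) {
        sep := []byte(",")
        for i := 0; i < b.N; i++ {
            bytes.Split(data, sep) // the tool scales b.N and reports ns/op for you
        }
    }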
Ah, yeah, I was testing a much much smaller byte array with multiple split points.
I'm not terribly surprised that in your case you've found the hand-coded byte version to be faster (though the difference is more than I would've guessed; care to post the code?)
However, I'm still not sure it's merited in the standard library. Split() could pretty easily be further specialized to basically call your single byte implementation or equivalent at the cost of a handful of instructions per call. Alternately, if you know you're dealing with very large byte slices with only a few split points, it is only a couple lines of code to write a specialized version that is tuned for that. The same argument could be made for IndexByte, but I'd claim that IndexByte is a much more fundamental operation in which one more often has need for a hand-tuned assembly implementation. I wouldn't say the same for Split. There's a benefit to having fewer speed-specialized standard library calls, and I don't think splitting on a byte with performance as a primary concern happens often enough to merit another way to split on a byte in the standard library.
But I'm certain that reasonable people who are smarter than me would disagree.
Some of the performance I was seeing on my crappy benchmarker evaporates using Go's benchmarker. But there is something else afoot. Try changing 'runs', which controls the size of the inner loop (needed to get enough digits of precision):
Why did it suddenly jump from 1 billion outer loops to just 1? I think there is a bug in the Go benchmarker here, because if you take into account the factor-of-4 difference in work and then divide by 1 billion, it looks like the first set of ns/op figures are actually correct and just aren't being scaled correctly.
Either way, the increase in performance is now only about 10%. Which I agree, isn't anything to write home about. More bizarre is that the bytes package one is faster for runs = 32 but not for runs = 128. I can't make head or tail of that, or why it should matter at all -- unless there is custom assembly in pkg/bytes that has odd properties inside that inner loop.
But this is only one half of my complaint: it's the interface that matters, and I see no good reason for having IndexByte, but no CountByte and SplitByte, contrary to what you say about which is more fundamental. Having to construct a slice containing a single byte just to call Split and Count left me with a bad taste in my mouth.