Go as an alternative to Node.js for very fast servers (studygolang.com)
176 points by g0lden on Aug 14, 2013 | 162 comments



That test is rigged to be a best case scenario for Go. How often do you send 1MB responses down to the client?

If you send 3KB responses then you would see both setups are much closer in performance.

Factor in some actual I/O and the difference will be even less. Then you'll eventually realize using either one makes little difference when it comes to performance.

This is why micro benchmarks are pure jokes. A real world site has a mix of response sizes, database I/O and caching. Testing with that kind of mix is the only way to evaluate something properly, and if you do, you'll see there's little difference in performance between the two.


Original author of the post in question here.

Please note that the microbench was "rigged" by the author of Node when he was first presenting it several years ago (spelled out in the article).

If you need a tl;dr, it's this: I don't care for JavaScript as a language. Many make the argument that JavaScript should be adopted widely server-side because of its speed. I assert that languages should be evaluated not only for performance but for maintainability, feature sets, standard library, etc. Go provides a great combination of execution speed, development speed, and ease of maintainability.


Development speed isn't very good with Go, because you have to re-invent almost everything yourself; the web libs are seriously years behind other platforms.

The default template language is also really archaic and no one has created a solid alternative yet that's actually well tested and used by the masses.

It might have good execution speed and the language itself might be nice, but the only thing that matters is going from point A to point B. Go will not get you there faster than other languages, and execution speed is a non-issue for pretty much every platform (even Rails) if you use the tools available to you to their fullest.

P.S., I compared Go to Node almost a year ago and even wrote a mini framework for Go to resemble a smaller version of Express. I eventually just said fk it and stopped because the gains were not even close to being worth it.


My understanding is that many people try Go and decide they don't like it, as you did; at the same time, many have experiences that are similar to mine and they embrace it. To each his own.

The word "rigged" in your original comment implied dishonesty on my part. I just wanted to clarify that the JS code used in the microbench was Ryan Dahl's, and mine was just a port to Go. I was merely giving Go the same task that he did.


I did like it, what I didn't like was solving common web app related issues that were solved on other platforms years ago. That's what completely killed my urge to consider Go.

I don't want to spend most of my time solving boring issues. I want to spend most of my time writing features for apps I make. Being productive makes me happy but everyone has different happiness triggers I suppose.

I see performance as a somewhat solved problem in 99.999% of the cases so to me a language is not even worth looking at anymore if that's all it offers. I'm not fortunate to be involved in the other 0.001% where easy to do caching might not be enough.

I didn't mean to imply you were dishonest. I just wanted to point out that sending out 1MB responses is kind of silly.

It took me like 2.5 years of messing around to finally realize all I care about is being able to take an idea and turn it into working code that's maintainable and scales well enough for the time being.


May I ask, what exactly did you have to re-invent? It's true that Go is very young and so are the libraries, and that it has only recently started to be seen as a web language. But my experience is that the most common problems for the web are solved with things like Revel. It's true that it's a bit feature-basic and there are not that many other options, but I wouldn't consider it insufficient.


To be fair I didn't use Revel. Instead I just used pat (the route mapper) and started trying to recreate most of what Express does using Go's stdlib, because nothing else really existed yet.

The Gorilla toolkit's APIs are inadequate, and it seemed like quite a few people agreed: most of them said they rolled their own solutions for things certain Gorilla libs did, just with a more intuitive and friendly API.

Go really isn't that young either. It's been what at least 4 years now? There's no excuse. It's not like the language is 6 months old.

As for re-inventing stuff, it's more so go's ecosystem rather than revel's shortcomings although revel does have its own shortcomings if you were to compare it to something like rails and not express.

Revel seems to be somewhere in between rails and express in terms of opinions which is fine but if it's going to make me less productive then I'm simply not going to use it.


> Go really isn't that young either. It's been what at least 4 years now? There's no excuse. It's not like the language is 6 months old.

1.5 if you count from the first stable release, which IMHO is what matters; before that it was just an experiment with a lot of uncertainties. It took Ruby 9 years to get from 1.0 to Rails. Those were different times, sure, but still.

I still consider it very young, or at least I don't know of any other younger language with a better ecosystem.

> Revel seems to be somewhere in between rails and express in terms of opinions which is fine but if it's going to make me less productive then I'm simply not going to use it.

Well, everyone has their preferences; there certainly are not many choices in Go, so it's not for everyone. But I'd keep an eye on it. Things can change very quickly.


I'll believe it when I see it. Almost nothing has changed since I went through my Go adventure which was like 9 months ago I think.

9 months to me is a huge amount of time. I don't want to have to wait years to be super productive. I want to be super productive right now and by using other platforms I can be.

For a new viable web platform to be accepted it needs to really explode in popularity. It has to offer MASSIVE gains.

Look at node, it offers performance and also offers the benefit of using the same language on both ends. That's pretty neat... maybe, but I think you would at least agree with me that node's popularity and growth has been unmatched. Even so, it's still quite far behind rails and I don't think it will catch up.

I'm not some massive rails fan boy either. I only started using it when 4.0 came out because the ease of caching seemed interesting to me and I was looking for an excuse to go from node/express to something more opinionated just to see if it was more productive.


> but I think you would at least agree with me that node's popularity and growth has been unmatched.

Of course I agree. But node is a unique case, since JavaScript is not exactly new and had a trillion people using it when node appeared. You cannot expect that to happen again any time soon, unless all the main browsers start supporting client-side PHP or something crazy like that.

Go is not 'there' yet, sure, and maybe it never will be. But its growth cannot be judged by node's standards; no truly new language could compete by that measure.

For a new language it's growing quite well: http://www.google.com/trends/explore?q=golang#q=golang&cmpt=...

Of course this doesn't mean you have to use it, but IMHO it cannot be regarded as a failure in any way.


My point was that even with node's mind-boggling growth, it still hasn't really overtaken Rails when it comes to developer productivity.


I like Go the language; I dislike Go the platform (currently). What's there is great, but there's not enough of it. It's a great 1.0 release: stable, fast, but sparse.

Luckily, Go has a decent head of steam behind it, so when I revisit it in a few years that should be a solved problem ;).


At the very least, calling 50% more lines a "similar line count" is disingenuous. 5 extra lines doesn't look like much, but on a typical multi-thousand line codebase, you're looking at thousands of extra lines.


Right...and you have the benefits of npm with node. AFAIK, there is nothing comparable with Go... and frankly, even the Go code in that terse example takes more lines and isn't as readable (IMHO) as the Node server...


If you read the original blog post at http://blog.safariflow.com/2013/02/22/go-as-an-alternative-t..., one of the commenters actually attempted the comparison between node.js and Go, and his results showed node.js to be significantly faster.


Go 1.1 brought a very significant increase in net handling speed.


Not only that, but although the particular test described in the linked blog post did not include database connectivity, we saw a massive increase in database performance in 1.1 versus 1.0.2 due to a fix in the sql package [1]. That was back in Round 4 of our project [2].

Just something to be aware of if anyone reading has Go 1.0.2 installed and has not yet upgraded. 1.1 is worth the upgrade.

[1] https://code.google.com/p/go/source/detail?r=45c12efb46

[2] http://www.techempower.com/blog/2013/05/02/frameworks-round-...


Thanks for the link to the original. Perhaps "February 22, 2013" should be identified in the title here?

That explains the use of Go 1.0.2.


Is it possible to write Go in a functional way? Are there first class functions? Anonymous functions? If so it seems it would be possible to write highly functional code given the flexibility of interface{}


> Are there first class functions? Anonymous functions?

Yes.

> If so it seems it would be possible to write highly functional code given the flexibility of interface{}

Not really, you'd have to add type assertions everywhere as go has neither generic functions nor user-defined generic types (only a handful of special-status types get to have type parameters, IIRC they're chans, arrays, slices and maps). That makes higher-order operations extremely cumbersome.
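For example, even a simple map over a slice has to box everything in interface{}, and the caller has to assert the concrete type back out at every step. A minimal sketch (my own illustration, not from the article):

  package main

  import (
      "fmt"
      "strings"
  )

  // Map applies f to every element; inputs and outputs are boxed in interface{}.
  func Map(xs []interface{}, f func(interface{}) interface{}) []interface{} {
      out := make([]interface{}, len(xs))
      for i, x := range xs {
          out[i] = f(x)
      }
      return out
  }

  func main() {
      upper := Map([]interface{}{"a", "b", "c"}, func(x interface{}) interface{} {
          return strings.ToUpper(x.(string)) // a type assertion at every point of use
      })
      fmt.Println(upper) // [A B C]
  }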

I'm also uncertain whether scalars (e.g. integral types) can be used through interface{}.


> I'm also uncertain whether scalars (e.g. integral types) can be used through interface{}.

They can (and there are optimizations to avoid heap allocation in some cases when you do use them), but you still have to write downcasts everywhere.


It is monomorphically typed, however, which limits the use of higher-order functions.


It seems to me that Go is incredibly crappy for writing in a functional style. Have a look at Rust if that's what you're after.


I wrote a library for Go that makes functional programming with channels-as-streams very comfortable, for me[1]. There is a problem with needing to box and unbox variables from the 'empty interface{}' type all the time but it's a minor wart IMO.

It's a really small library that was made in response to my own intuition (similar to yours) that the Go standard library wasn't really embracing a functional style. I haven't used it in any major projects but for small one-off things it's proved pretty useful. It basically works like pipes on the command line.

1: https://github.com/eblume/proto
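To give a flavor of the channels-as-streams idea without reproducing the library's API here (this is my own sketch, not proto's actual functions):

  package main

  import "fmt"

  // Gen is a producer at the head of the pipe: it pushes values into a channel.
  func Gen(xs ...interface{}) <-chan interface{} {
      out := make(chan interface{})
      go func() {
          for _, x := range xs {
              out <- x
          }
          close(out)
      }()
      return out
  }

  // MapChan is a pipe stage: it reads from in and writes f(x) downstream.
  func MapChan(in <-chan interface{}, f func(interface{}) interface{}) <-chan interface{} {
      out := make(chan interface{})
      go func() {
          for x := range in {
              out <- f(x)
          }
          close(out)
      }()
      return out
  }

  func main() {
      squares := MapChan(Gen(1, 2, 3), func(x interface{}) interface{} {
          i := x.(int) // the boxing/unboxing wart mentioned above
          return i * i
      })
      for s := range squares {
          fmt.Println(s) // 1, 4, 9
      }
  }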


Up to a point, but go's limited datatypes make it difficult to write the kind of code you're used to writing in haskell/f#/etc.


A Haskell version with the Warp server performs just a little bit worse than the Go version (6 secs vs 7 secs), with slightly higher minimum latency (68 ms vs 71 ms).

  go version go1.0.2
  runghc 7.6.2
  cabal packages of today
Run with "runhaskell main.hs" on localhost over loopback :)

  import qualified Network.Wai as Wai
  import qualified Network.Wai.Handler.Warp as Warp
  import qualified Network.HTTP.Types as HTTP
  import qualified Data.ByteString as ByteString
  import Blaze.ByteString.Builder.ByteString (fromByteString)
  
  main = do
      let port = 8000
      Warp.run port app
  
  app req = do
      let n = 1024*1024
      let bytes = fromByteString $ ByteString.replicate n 100
      return $ Wai.ResponseBuilder HTTP.status200 [] bytes


Am I right in thinking that Haskell servers will likely be winning on these benchmarks after Mio is released in 7.8.1? https://news.ycombinator.com/item?id=6198068


Since Warp seems to use many lightweight (green) threads behind the scenes, and Mio is mainly about improving multithreaded performance on actual multi-core setups, the answer seems to be yes.

However, it would probably only help if you use -threaded (which you should, anyways).


Have you tried precompiling with -O2? IIRC runhaskell runs code like GHCi: interpreted bytecode by default (for source .hs files).


Yes I have, and I invite you to check it yourself. There is no significant difference in this benchmark between "ghc -O2" and "runhaskell". I didn't investigate why... but I would be very interested in the details :)

Also "-threaded" gives the usual penalty of bookkeeping and bad caching and stuff. You don't need threads in such a benchmark. Though I am mildly surprised that "node" runs faster with multiple processes :)


Don't use "runhaskell", it's more of a "script mode".

Use: ghc -threaded -O2 main.hs && ./main


There is no significant difference in this benchmark. Do you measure something different?


I didn't measure, no.

I suppose maybe it uses the optimized compiled packages code, and the main code doesn't really do anything, so -O2 doesn't matter.

However, I would still expect "-threaded" to have an effect.


Yes, a negative one for this benchmark :)


there are first class functions, and closures work the way you would expect. Since it's statically typed and there are no generics, there's no way to implement something like a left fold; you'd have to implement it on a per-type basis, but you could certainly do that. I believe the answer you're looking for is "no", but Go's functions are nothing to sneeze at. http://jordanorelli.tumblr.com/post/42369331748/function-typ...
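For instance, here's what a per-type left fold looks like (a quick sketch; without generics you'd repeat this for each element type):

  package main

  import "fmt"

  // FoldLeftInt is a left fold specialized to []int.
  func FoldLeftInt(xs []int, acc int, f func(int, int) int) int {
      for _, x := range xs {
          acc = f(acc, x)
      }
      return acc
  }

  func main() {
      sum := FoldLeftInt([]int{1, 2, 3, 4}, 0, func(acc, x int) int {
          return acc + x
      })
      fmt.Println(sum) // 10
  }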


> you'd have to implement it on a per-type basis

Alternatively you can implement everything on interface{} and force the user of the higher-order function to use type assertions everywhere.


If you're interested, I did an in depth write up on how to do it. [1]

[1] - http://blog.burntsushi.net/type-parametric-functions-golang



I did the benchmarks outlined at the end of the article.

Summary of results:

1. Node.js v0.10.15, single worker: 46.2 seconds

2. Node.js v0.10.15, cluster 8 workers using naught: 17.2 seconds

3. Go 1.0.2, GOMAXPROCS left default: 3.5 seconds

4. Go 1.0.2, GOMAXPROCS=8: 3.7 seconds

Detailed results below:

1. Node.js v0.10.15, single worker

  Concurrency Level:      100
  Time taken for tests:   46.217 seconds
  Complete requests:      10000
  Failed requests:        0
  Write errors:           0
  Total transferred:      10486510000 bytes
  HTML transferred:       10485760000 bytes
  Requests per second:    216.37 [#/sec] (mean)
  Time per request:       462.168 [ms] (mean)
  Time per request:       4.622 [ms] (mean, across all concurrent requests)
  Transfer rate:          221580.08 [Kbytes/sec] received
  
  Connection Times (ms)
                min  mean[+/-sd] median   max
  Connect:        0    0   0.2      0       3
  Processing:   193  461  36.2    450     944
  Waiting:       16  235 127.3    235     534
  Total:        193  461  36.2    450     944
  
  Percentage of the requests served within a certain time (ms)
    50%    450
    66%    467
    75%    470
    80%    486
    90%    492
    95%    514
    98%    517
    99%    545
   100%    944 (longest request)
2. Node.js v0.10.15, cluster 8 workers using naught

  Concurrency Level:      100
  Time taken for tests:   17.199 seconds
  Complete requests:      10000
  Failed requests:        0
  Write errors:           0
  Total transferred:      10486510000 bytes
  HTML transferred:       10485760000 bytes
  Requests per second:    581.41 [#/sec] (mean)
  Time per request:       171.995 [ms] (mean)
  Time per request:       1.720 [ms] (mean, across all concurrent requests)
  Transfer rate:          595408.80 [Kbytes/sec] received

  Connection Times (ms)
                min  mean[+/-sd] median   max
  Connect:        0    0   0.2      0       3
  Processing:     7  171 116.4    149     739
  Waiting:        5   96  81.9     71     710
  Total:          8  171 116.5    150     740

  Percentage of the requests served within a certain time (ms)
    50%    150
    66%    197
    75%    236
    80%    266
    90%    324
    95%    397
    98%    438
    99%    493
   100%    740 (longest request)
  
3. Go 1.0.2, GOMAXPROCS left default

  Concurrency Level:      100
  Time taken for tests:   3.542 seconds
  Complete requests:      10000
  Failed requests:        0
  Write errors:           0
  Total transferred:      10486730000 bytes
  HTML transferred:       10485760000 bytes
  Requests per second:    2823.16 [#/sec] (mean)
  Time per request:       35.421 [ms] (mean)
  Time per request:       0.354 [ms] (mean, across all concurrent requests)
  Transfer rate:          2891181.71 [Kbytes/sec] received

  Connection Times (ms)
                min  mean[+/-sd] median   max
  Connect:        0    1   0.3      1       3
  Processing:     9   35   2.2     34      56
  Waiting:        0    1   1.3      1      22
  Total:         12   35   2.3     35      57

  Percentage of the requests served within a certain time (ms)
    50%     35
    66%     36
    75%     36
    80%     36
    90%     37
    95%     38
    98%     39
    99%     41
   100%     57 (longest request)
  
4. Go 1.0.2, GOMAXPROCS=8

  Concurrency Level:      100
  Time taken for tests:   3.657 seconds
  Complete requests:      10000
  Failed requests:        0
  Write errors:           0
  Total transferred:      10486730000 bytes
  HTML transferred:       10485760000 bytes
  Requests per second:    2734.54 [#/sec] (mean)
  Time per request:       36.569 [ms] (mean)
  Time per request:       0.366 [ms] (mean, across all concurrent requests)
  Transfer rate:          2800429.67 [Kbytes/sec] received

  Connection Times (ms)
                min  mean[+/-sd] median   max
  Connect:        0    1   0.4      1       3
  Processing:    19   36   2.5     35      57
  Waiting:        0    1   1.1      1      16
  Total:         20   37   2.5     36      58

  Percentage of the requests served within a certain time (ms)
    50%     36
    66%     37
    75%     37
    80%     37
    90%     38
    95%     39
    98%     42
    99%     51
   100%     58 (longest request)


There were significant performance increases in Go 1.1 (see: http://golang.org/doc/go1.1#performance); I would suggest you also benchmark a more current version.


When I see someone doing something like this and:

  1. They are not using the latest Go (atm 1.1.2)
  2. GOMAXPROCS is not = the number of CPUs
  3. They are using ab rather than something more scalable like wrk
I assume they either don't know what they are doing, or want to make Go look bad.

On a side note, Go is already known to be much faster at web-serving than node.js: http://www.techempower.com/benchmarks/#section=data-r6&hw=i7...


I did the test that the article suggested, with the versions of the tools that I have installed on my system. This is a comment on a blog article, not an attempt to engineer the perfect benchmark.

Also in this bench Go danced circles around node. I dunno what you're complaining about.


>I dunno what you're complaining about.

I'm complaining because you and other people will go on to:

  1. Use a version of Go in your development that is slower (and has much worse memory characteristics) and lacks new features.
  2. End up running all your programs on a single core until you understand GOMAXPROCS
  3. Use ab to bench real things which is bad
So "my complaining" is trying to help you.


I agree with Voidlogic here. Perhaps his tone was a little confrontational, but his intentions were good. :)

    Go 1.1 > Go 1.0.2
    wrk > ab
In particular, ab should be avoided whenever possible. Apache Bench (ab) remains a single-threaded tool, meaning that for high-performance servers in particular, your exercise will run into the limits of Apache Bench before the limits of the server(s) being tested. The lighttpd team has a multi-threaded clone named weighttp that I would recommend if you want something that is functionally similar to ab and uses similar command-line arguments.

Wrk uses a slightly different argument syntax from ab and WeigHTTP but has some upsides:

1. Wrk is also multi-threaded.

2. In our experience, wrk is slightly higher-performance than WeigHTTP (~5 to 10%).

3. Wrk provides average, maximum, and stdev for latency.

4. Wrk provides a time-limited mode (rather than request-count limited), which is appealing for some test types.
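For reference, the invocations look roughly like this (counts and durations are illustrative):

  # ab: request-count limited, single-threaded
  ab -n 10000 -c 100 http://localhost:8000/

  # wrk: time limited, multi-threaded
  wrk -t 8 -c 100 -d 30s http://localhost:8000/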

In my experience, as long as you configure Go and node to use all of your cores, Go will benchmark better than Node in any permutation of these configuration variables:

    Go 1.0.2 and node tested with ab.
    Go 1.1 and node tested with ab.
    Go 1.0.2 and node tested with wrk.
    Go 1.1 and node tested with wrk.
V8 is a very fast JavaScript runtime; node.js is modestly quick at handling HTTP requests. But among the many features of Go is a high-performance HTTP package. If you've used both, it isn't all that surprising that Go's performance clocks in higher than node's.


Is this the wrk you're referring to:

https://github.com/wg/wrk


Yes. Sorry for not providing the link!


> Perhaps his tone was a little confrontational

Sorry, that was not intended.

>but his intentions were good. :)

They really were...


>> wrk > ab

Can you please expand on why? I recently bumped into wrk and am in the process of evaluating a switch from ab, thank you


daemon13, you might also be interested in reading this thread: https://news.ycombinator.com/item?id=6114282


thank you!


Sure. I just edited my message above.


Thank you!

You've got a cool blog, esp. the prior posts selection!!


Can you help me by telling me why the way I used GOMAXPROCS is wrong, and how to use it correctly?


Not so much wrong as my impression was that you didn't understand it. If that is not the case, I apologize.

For example, you said: GOMAXPROCS left default. I don't know how you set your environment vars; are they unset, so the default = 1? You didn't mention in your post that the GOMAXPROCS=1/single node worker test cases are really toy test cases (useful only for benchmarking). So if you know everything below, then great! Maybe other people can learn:

GOMAXPROCS is the number of OS-level threads that the Go runtime is multiplexing Go tasks (goroutines) over.

So if GOMAXPROCS = 1, when one goroutine blocks, another will run, BUT you will never use more than one OS thread, and thus you will never use more than one logical core.

Setting GOMAXPROCS correctly is per application. For example, GOMAXPROCS=1 might be right for a command-line tool or a program that was designed to have multiple instances started on the same machine. That being said, the vast majority of the time, any high-load application I have written is best with GOMAXPROCS=<# of logical CPU cores>. So Go always has concurrency, but GOMAXPROCS gives it parallelism. GOMAXPROCS > 1 will also allow the garbage collector more parallelism.

So if we are talking about a benchmark like this, ideally we want to process requests made in parallel in a parallel fashion. One clear guideline: if you test a Node.js worker cluster of a given size, you should probably test Go with GOMAXPROCS at the same number.

All this being said, depending on your CPUs implementation details, you would sometimes be better off setting both your node worker count and GOMAXPROC to the number of physical rather than logical cores. Sometimes simultaneous multi-threading (SMT, aka hyperthreading) actually creates more overhead than any concurrency gains it offers.

In short, when testing something like this I would always test:

  1. n = 1 (with a disclaimer note)
  2. n = physical CPU count
  3. n = logical CPU count
where n is the number of GOMAXPROCS/node worker threads.
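If you'd rather set it in code than via the environment variable, the usual one-liner is this (runtime.NumCPU reports logical cores):

  package main

  import "runtime"

  func main() {
      // In Go 1.0/1.1 the default is 1 unless the GOMAXPROCS
      // environment variable says otherwise; this uses every logical core.
      runtime.GOMAXPROCS(runtime.NumCPU())
      // ... start the server as usual ...
  }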


Okay so this is why I was confused by your comment:

I did use GOMAXPROCS with the number of logical cores that I have, and I did test the node cluster with the same number.


How much of an advantage, if any, does Node provide because it's more "mature", or at least has been used for far longer?

For example, when I go to StackOverFlow I see that Node has far more questions asked:

http://stackoverflow.com/questions/tagged/go

http://stackoverflow.com/questions/tagged/node.js


I'm really uncertain about node being judged more "mature".

Being developed by Thompson and Pike makes you gain something like 30 years of "maturity". Plus, running in production in the Google infrastructure is far more of a proof of maturity than running the chat service of every hackathon project for 2 years.


Mature means many things, including being able to get support, packages, existing code, solutions to common problems, etc. I'm sure Go is a solid project.


So do I just email all my questions directly to Thompson and Pike, then?

In this case, language maturity means the quantity of reference material available to help troubleshoot problems as they arise.


The golang-nuts mailing list is extremely active; you'd have no issues finding answers there.


The difference is sending emails to some mailing list and waiting for someone to answer, vs. entering a few keywords into Google and having the top result be the answer you want, because someone already asked the same question before.

This is not a stab at Go or its maturity, but rather a realistic assessment of the importance of answers being (nearly) instantly available when you're working on a project or troubleshooting a critical issue.


Agree on that, yet, if you want the big picture, you should multiply that by the probability of having something to troubleshoot, or a critical issue, in the first place.

That would give you a better estimate of the risk you're taking by using that language. And that's where the 30 years of PL design from Go's authors gains some importance.

You could consider my argument a bit far-fetched, but Go's design has been explicitly focused on keeping things simple and surprise-free, and so far all the reviews seem to agree on that point. That could compensate for the lack of results in Google (especially compared to what node.js forces you to do).


Yeah I don't doubt you'd be able to find answers to any Go question, the point though is that you'll have to work a little harder than with a "more mature" "language" like Node.js.


Two smart guys writing a programming language instantly gets you 30 years of maturity. Did I just wake up in some bizarro universe or something ?

And nobody cares if Go is being used for some tiny, insignificant part of the Google infrastructure. Get back to me when it is used for a stock exchange, betting site or complex web app.


What stock exchange is powered by node.js?


dl.google.com handles all the downloads for Google Chrome, factory images for Android, Eclipse... not exactly tiny and especially not insignificant

Complex web apps:

Cloudflare: https://www.cloudflare.com/railgun

BBC: http://www.quora.com/Go-programming-language/Is-Google-Go-re...

SoundCloud: http://backstage.soundcloud.com/2012/07/go-at-soundcloud/


If this is your requirement, may I suggest:

http://stackoverflow.com/questions/tagged/java


Much more so than internet posts about it, there are countless modules for NodeJS as well. Need to upload, store, and return images in Mongo using streaming? Done. Need to keep servers running despite errors? Done. Amazon S3? Done. Need an alternate cloud provider? Done. Need login using OAUTH/OpenID/whatever? Done. On and on. Most times you need some general-purpose web functionality, there are already a couple of Node modules in that area, if not an entire framework or sample app and tutorial targeted at that area.


By now, I'd assume the language runtimes are in the noise when thinking about maturity. The most likely culprits will be the application code followed by the library code. So the real question should be "Are there any big scary monsters in the libraries I need?", followed by "Which language do I think will best enable me to hit my target?".



I did find this bit:

  // When calling .end(buffer) right away, this triggers a "hot path"
  // optimization in http.js, to avoid an extra write call.
  //
  // However, the overhead of copying a large buffer is higher than
  // the overhead of an extra write() call, so the hot path was not
  // always as hot as it could be.
  //
  // Verify that our assumptions are valid.
https://github.com/joyent/node/blob/master/benchmark/http/en...


Test nr 3 sends a whopping 2891 Mbytes/s over http ? I know it's localhost, but wow ?!


What tool are you using for your benchmarks?


Apache Bench (ab). I did exactly what the article suggested for the benchmark.

http://httpd.apache.org/docs/2.2/programs/ab.html


ab gets very... iffy at anything past moderate loads, due to its lack of a multi-threaded design. I'd highly recommend wrk: https://github.com/wg/wrk

Much better at 100 reqs/sec and up.


Try with GOMAXPROCS set to the number of real cores on your test machine (e.g. ignore hyperthread cores).


OK, I did GOMAXPROCS=4: 3.6 seconds. Node cluster count 4: 17.5 seconds.


What do you think the difference is between a "real core" and a "hyperthread core"?


An HTT CPU can still only execute one instruction at a time; there are often instructions that cause the CPU to have idle time (stalled waiting for data), and hyperthreading allows the CPU to spend that otherwise-idle time making progress on a separate task list. However, this still means that the two scheduled threads are contending for the same execution unit... The parent is suggesting that this contention may cause more of a performance degradation than the advantages that HTT provides, which would be easily resolved with some benchmarking :D


note: the above was a simplification, based on my understanding of HTT CPUs as of about 2008. Apparently things got more complicated in the last 5 years :D The bottom line remains that HTT can cause slowdown in some cases, and you should benchmark with it turned off as well.


No, not even close. Each thread on a Haswell CPU, just as an example, has 8 execution ports. Each Haswell core has ten execution units. The CPU can retire way more than one instruction per cycle.


You are absolutely correct, but you could also afford to be a bit more polite. Sentences like "In other words, you have no idea what the difference is" might be true but they're also a bit rude.


They aren't absolutely correct at all, and aaron was actually close to the money. Thrownaway is fundamentally misrepresenting (or misunderstanding) how threads -- in the operating system sense, which is what we are talking about here -- relate to microcode and execution units in a core.


In theory


It's not just theory, you typically get about 3 instructions per cycle in practice.


There is a huge difference between the two. Hyperthreaded cores only give you a speed up in specific situations where additional work can be squeezed into the pipeline.

http://en.wikipedia.org/wiki/Hyper-threading

The speedup is very work dependent and in practice for things like web pages and api servers you generally only get another 20-40% of performance from them rather than a full 100%.


In other words, you have no idea what the difference is.

A hyperthreaded Intel CPU has M functional units and N decode/issue pipelines.

A non-hyperthreaded Intel CPU has M' functional units and N' decode/issue pipelines.

A hyperthreaded Intel CPU with hyperthreading disabled has M functional units and N/2 decode/issue pipelines.


Well, it shouldn't matter that much anyway; I don't think modern kernels will put the same process's threads on the same physical core.

That's the reason Intel tells you to shut hyperthreading off if your operating system doesn't support it.


I'd be humored to hear your idea of what the difference is, given the misplaced use of scare-quotes.

A hyperthread core is a virtual core -- it is not actually a core at all but is a re-purposed, possibly stalled physical core. While it can improve some scenarios, in some cases (particularly core-saturating benchmarks) it can actually hurt performance.

This is hardly an out there or controversial statement. Further I didn't say to disable hyperthreading, I said to try setting parallelism to the physical cores. Again, nothing, whatsoever, controversial about that.


It is an "out there" statement because it's entirely, radically incorrect. A processor thread represents a full-blown decode and issue pipeline. A core represents a set of execution resources. Each pipeline can dispatch to any execution unit equally. In case of contention for the same execution unit, one thread issues immediately and the other thread issues next.

If you don't disable hyperthreading, but instead run four threads on an 8-thread CPU, it is extremely likely that the threads will be scheduled on the first two cores/four threads and the other two cores will be shut down, especially on the newer intel CPUs with "turbo" features where this strategy can have large benefits.


The operating system schedules threads across cores, and the processor has zero say in the matter (further, the extra execution units primarily serve speculative execution down branches, essentially executing future scenarios). Both Linux and Windows are hyperthread-aware, and will schedule threads to physical processors first, then to hyperthread virtual processors (given that a virtual processor shares resources with the physical core and can sabotage performance).

This is common knowledge, and your laughable obnoxiousness, which anyone who has ever worked with multithreaded code on an HT processor knows is farce, rings pretty ridiculous.


No, the power-aware scheduler in Linux does not work as you describe. On a turbo-capable Intel CPU, if there are N program threads that will fit on M cores where M is less than the total cores on a socket, and the CPU will enter P0 state, then the threads will run on as few cores as possible and the remaining cores will be shut down.


Two threads, each running at 100%, will be assigned to two physical cores. This is reality, and is obvious given that assigning it to a physical core and a hyperthread core at most will give you about 130% instead of the 200% two physical cores will provide.

Unless, of course, you've set a power profile to prioritize power efficiency, but that would be an absolutely ridiculous assumption given that we're talking about benchmarks.


Go doesn't just mean fast for me; in that space there is also the JVM. To me, it fits the fast, lean, and easy-to-deploy space.


The headline of this article bothered me in this regard. Node.js is decidedly middle of the pack in terms of performance:

http://www.techempower.com/benchmarks/

In addition to various JVM technologies, there are faster technologies in C++, PHP, and Lua.


I think you will find that in many actual applications PHP is not faster than Node.

Further, once you apply a framework to paper over some language warts, performance is often terrible and can only be rescued by extremely liberal caching.

If this were not so, it would be hard to see the motivation for HipHop.


Might the motivation simply be to get closer to C++ performance?

I really don't value this type of response, where the benchmark is brought into question yet nothing is substituted as evidence. In the few cases where someone does claim that they rewrote a sufficiently complex system in Node from some other language, it's impossible to discount that the performance changes come from architectural choices rather than language choices.

Playing devils advocate, I've heard the same claim you put forth here about Node.js: it gets much slower in real apps because you have more slow JS code being run vs the very fast C libraries that back the core of Node.js.

To me, benchmarks like the one I posted above are the most compelling form of evidence we have. What I take from it is that a great many "boring" languages and frameworks are really very fast. It's not the answer that most people want to hear of course; it goes against the current popular hype.


What's easier about deploying Go vs with the JVM? Deploying stuff I've written in Clojure has been pretty easy.


It's more about your 'typical' Java app, which often means a huge array of JARs and a folder structure you need to bootstrap.


My understanding is you can just build an uberjar that contains all that, then just drop that jar on your server and run it (java -jar myuberjar.jar).


go build

It compiles a binary, just send it to the server and it will run, no need to even have go installed on the server (or any libs really).


Doesn't this require your development machine to use the same architecture and OS as your server?


No. It's simple to build Go toolchains for whatever your deploy os/arch target is. Then you just cross-compile.

* Download Go source

* Extract, cd go/src/

* GOOS=linux GOARCH=386 ./make.bash #this will build the linux_386 toolchain

* GOOS=linux GOARCH=amd64 ./make.bash #linux_amd64

* GOOS=linux GOARCH=arm ./make.bash #you get the picture

GOOS can be windows, darwin, freebsd, netbsd, plan9.

Then when you want to cross-compile your app, you do:

GOOS=linux GOARCH=arm go build myApp.go

That's it. Now you have a statically linked binary that you can drop on whatever your target is. As someone who has had to cross-compile a lot of C and C++ code, I find this simplicity to be a huge win.


Thanks for that. It's been a while since I used Go. That is quite nice.


Did you compare with a JVM toolkit with similar concurrency libraries like Akka/Scala?

I'm curious to find out the results. There was a recent comparison, I don't remember which, that put Scala way ahead of Go, possibly because of a superior GC.

Wonder if the results correlate


Also, Go as an alternative to <whatever> for very slow servers. I use Go on my Pi and the thing flies.


I've worked with both node and Go. There is a lot of hyperbole and fluff points about Go's strengths in the original article (sorry to the author!). Things that are touted as huge wins for Go have equally better things in node. Both platforms are great and they both have appeal for different people.

My eloquent post was eaten by an expired link on the original article but here are some counter-points:

The same property applies to all code that uses semicolons to end lines. Use jslint/jshint/an IDE. The semicolon is "optional", not marked as "leave it out because it makes scripts nicer."

Use jslint/jshint/an IDE to prevent globals. Seriously. It's the same as running go fmt on your code.

Forcing people to use "channels" as a best practice to accomplish scaling is the same as the best practice of callbacks - but yes, channels are nicer to use. Node standard modules require callbacks by default, just as the Go standard library implements channels by default.

32-bit integers in JS don't have float problems because the precision doesn't break

Typing in Go can still be annoying for some situations. If you're dealing with external content (creating an API with mutable content that you still need to read) it can be annoying at best (e.g. reminds me of writing C). Types in Go are a nice implementation though.

npm is way better than go get and there are at least 3 projects in Go trying to replicate npm's ease-of-use

The vim/emacs syntax highlighter is nice but it's awfully frustrating if you don't use vim/emacs. This is due to Go's young age, but you shouldn't be forced into using a certain editor to get syntax highlighting.

The alternative to "go fmt" for Javascript is a good IDE or jslint/jshint.

In Go, you get a lot in the standard library, but you miss out on a lot in the community. Yet another young language problem, you wind up having to roll your own for a lot of things that should just exist. It can be frustrating looking for answers because you might be the first person working on the problem in the language.

Also, just as a general thing I've perceived, people seem to argue static types vs dynamic types more than that a particular language is "better." Go and JS/node are both great!!!! Go is statically typed, JS is not, and you deal with the consequences in both situations. Statically typed languages have big faults with external data handling that sometimes cripple features (or make them far more difficult to accomplish). Dynamically typed languages can fall victim to variables being used incorrectly (especially when not using an IDE). That's the biggest bulk of the difference in my experience.


> Typing in Go can still be annoying for some situations. If you're dealing with external content (creating an API with mutable content that you still need to read) it can be annoying at best (e.g. reminds me of writing C). Types in Go are a nice implementation though.

Indeed, that's probably the biggest flaw I've found working with Go's type system. Trying to work with unknown n-level JSON is a real pain.


I'm not entirely sure what problem you're referring to here, but I wrote a tool to help with mapping JSON to native Go types: https://github.com/ChimeraCoder/gojson . Basically, you specify the narrowest possible type that covers all expected (valid) inputs.

If your JSON is effectively "strongly typed" (most APIs are), this is going to be a huge win for you.

If your JSON is not, then you'll have a problem in any statically typed language (not just Go), because you need some way to reason about the type. You'll have the same problem with dynamically typed languages as well - the main difference is that Go will never do implicit casts (I would view this as a good thing).

I've done a lot of work in Go involving JSON (that's originally why I wrote the above tool - to save myself time), and in practice, it's rare that I have to do anything more than decode, check for an error[0], and then move on.

[0] Which is something everyone should do in all languages, not just Go - once you've confirmed that there is no error, you rid yourself of a lot of possible bugs that could pop up later on in harder-to-discover places.
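In code, that decode-and-check flow is just a few lines (a minimal sketch, with a hypothetical payload):

  package main

  import (
      "encoding/json"
      "fmt"
      "log"
  )

  // User is the narrowest type that covers all expected (valid) inputs.
  type User struct {
      Name  string `json:"name"`
      Email string `json:"email"`
  }

  func main() {
      data := []byte(`{"name": "gopher", "email": "gopher@example.com"}`)

      var u User
      if err := json.Unmarshal(data, &u); err != nil {
          log.Fatal(err) // check the error once, then move on
      }
      fmt.Println(u.Name, u.Email)
  }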


Very interesting! It might be exactly what I was missing. I ended up using simplejson[0] but it still felt like a workaround. I'll give yours a try next time. Thanks!

[0]https://github.com/bitly/go-simplejson


> 32-bit integers in JS don't have float problems because the precision doesn't break

Unless you try multiplying them.


As long as the result is less than 9007199254740992 (about 9 quadrillion; 2^53). Who uses numbers that large? Regardless of the practicality, you can use a library if you need support for bigger numbers (e.g. crypto's BigInt.js).
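That cutoff isn't JS-specific; it's a property of any IEEE-754 double. A quick demonstration using Go's float64, which has the same representation:

  package main

  import "fmt"

  func main() {
      // 2^53 is the last point where every integer is exactly
      // representable in a 64-bit float; one past it rounds away.
      a := float64(9007199254740992) // 2^53
      b := float64(9007199254740993) // 2^53 + 1 rounds back down to 2^53
      fmt.Println(a == b)            // true
  }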


If I am starting from zero, which is easier to learn/better to learn: Go or Node?


Node and javascript. Vastly more learning resources and applicability at this time.


So beyond the ironic comment mentioned before about the Golang hello world tutorial using Chinese, what is the real reason behind the Chinese following for Go?

It seems like this site confirms there must be a pretty dedicated Golang following, but I cannot figure out for the life of me why. Does anyone actually know?


For me, Go has the least "What the hell does this code do?" and "Why isn't this code working?" of any language I've ever used.

I'd say Go isn't an impressive programming language, but a very impressive software engineering language.


> Go isn't an impressive programming language, but a very impressive software engineering language.

Curious, what is the distinction?


He means that Go will not impress programming language geeks and academic PL researchers with cool evolved syntax or cutting edge features, but is a fine language for actually building software.

In the same sense that a language that "in theory" has tons of cool stuff might be unusable in practice (due to complexity, opaqueness, lack of libs, strange syntax, etc).


IMO, simplicity, no surprises, doing stuff in a way that "makes sense" without a lot of boilerplate or cruft.


Maybe because a lot of choice has been removed from the system? I've worked with a bunch of people from different countries and find that the more strict the gov't, the less they like choice, in general. All anecdotal of course...

Giving one guy from the Ukraine the typical choices at a restaurant here in the US would freeze him in place.

    What would you like to drink?  Tea

    Sweetened or Unsweetened?  Sweet. frustration += 1

    Sugar or artificial sweetener.  Sugar.  frustration += 2

    Soup or Salad?  Soup.  frustration += 7

    Which soup?  Minestrone, Pasta Fagioli, Italian Wedding?  frustration += 14
and on and on... Ukraine started packing his lunch.

Then there was the guy from Iceland... He took 10+ minutes to decide what to order in a new place and once he figured out what he liked in a particular place, he never gave the waitstaff a chance to offer him a choice. This, well done, salad with x dressing, with unsweetened tea. Iceland liked making the choices but only a few times.

Some people like to be dictated to: Code will be formatted like this. Braces will be like this. etc. The author of the article thinks these are a feature while I find it obnoxious.

My brain requires braces to line up in the left column, anything else slows me down. Maybe I'm just old?


It does not make sense to group people's reactions to choice like this. As a counter-example, Apple products are very popular in the USA and other Western countries, and Apple does not give you many choices.


I used a lot of weasel words in my comment to avoid follow up comments like this.

Maybe, anecdotal etc...

It was simply something I noticed from lots of years working with lots of people from lots of different countries. No offense was intended.


Off-topic on the subject of restaurants:

I use the ML approach of combining other people's algorithms. :)

My first time in a new place, I'm usually with friends, and I imitate their orders. On the rare occasions when I'm not, I ask the waiter what his and the other diners' favorite plates are, what they order when they eat there.

This way even my initial orders are lightning-fast and already judged as 'tasty' in comparison to other offerings. And over time, I can investigate other plates for myself.


    > My brain requires braces to line up in the left column, 
    > anything else slows me down. Maybe I'm just old?
One amazing property of humans is our ability to re-shape ourselves in new environments; to literally learn new tricks. The handicap you identify here is only serving to diminish your potential.


I'm not sure it's a handicap...

I've been paid for work in roughly 12 different programming languages over a 28+ year career, among them python (no braces at all, significant white-space), Rexx (keywords as braces), C, C++, Perl, Pascal, Bourne shell, Awk, C#, (braces braces and more braces), x86 assembler and several employer specific scripting tools.

Guess which ones are most comfortable? Certainly not python though it is the most bang for the buck of the languages I use and I'd rather eat hospital food for the rest of my life than write (or, deity forbid read) one more line of x86 assembler.

What was it we were talking about again?


Not necessarily. I suppose the hello world tutorial shows hello world in Chinese mostly to show Unicode. And as for this link, this is just an external link (http://blog.safariflow.com/2013/02/22/go-as-an-alternative-t...) that someone posted on a forum. Things like these happen on every forum in every country.


Yeah, not sure what happened but this was the link below a few minutes ago.

http://bbs.studygolang.com/thread-278-1-1.html

Notice that, despite English content, the forum looks Chinese-language oriented. I think it is cool, but not normal to me.


If you meant the Hello World example, it is probably because the board game Go originates from China: http://en.wikipedia.org/wiki/Go_(game)


I doubt it. If I recall the example correctly, it was "Hello, 世界!" Since "世界" means "world" the example is really the same "Hello, world!" example from C, only with one of the words changed to something that clearly requires Unicode. That was certainly my reaction when I saw the example: "oh, good, easy Unicode support"

I expect the choice of Chinese as the language probably comes down to the fact that you can't play code-page games very easily with Chinese, which you could with Korean, Arabic, etc. It is probably the most widely spoken language with a non-Latin character set. Also, chances are very good that a Chinese-writing colleague of the example's writer was readily available. Although, who knows, it would be fun if it were because of the board game.


There was some discussion here, but no one seems to know more: https://news.ycombinator.com/item?id=6161399


Your original comment:

> They seem to be making a concerted effort to make Go Chinese friendly https://code.google.com/p/go-zh/

The Chinese fork seems to be a project started by Minux. Minux (Shenghou Ma) is a Go contributor and is very active in the Go community, but he does not work for Google, as far as I know.


Interesting. Unfortunately I did not make it past year one Chinese in university, so I do not remember enough. If I had known, I would go find out if he contributes a lot on the Study Golang BBS.


Do you mean specifically why there is a dedicated Chinese following in particular, or just in general?

Go is a clean, intuitive language with a very robust supporting ecosystem and a strong concurrency model. It builds fast, small code and makes quick work of big problems. Why shouldn't there be a big following of one of the better platforms to come out in a long time?


My only gripe is its intended use as a systems programming language.

The runtime kind of makes it a silo; ie: hard to bind other languages to it through an FFI.

Of course if I am mistaken or there's something being done to address such a scenario then I will be much happier seeing more and more infrastructure code shipping in Go.


> My only gripe is its intended use as a systems programming language.

It's not. It's a new Java, not a new C++.

> ie: hard to bind other languages to it through an FFI.

More or less impossible: GC and goroutines are not optional, so you'd need to cleanly set up and shut down the Go runtime. You'd have to embed Go as you do Lua or Python.


> > My only gripe is its intended use as a systems programming language.

> It's not. It's a new Java, not a new C++.

Not according to the Go developers. "Go is a general-purpose language designed with systems programming in mind." http://golang.org/ref/spec#Introduction


It's not the first time they could be quoted with completely off-their-rocker statements.


Go has the same FFI language that most languages do: C.

http://golang.org/cmd/cgo/
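A minimal sketch of what that looks like in practice (calling libc's atoi through cgo):

  package main

  /*
  #include <stdlib.h>
  */
  import "C"

  import (
      "fmt"
      "unsafe"
  )

  func main() {
      // C symbols appear under the magic "C" pseudo-package.
      s := C.CString("42")
      defer C.free(unsafe.Pointer(s)) // CString mallocs; free it on the C side
      fmt.Println(C.atoi(s))          // 42
  }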


Wrong way around. By "bind languages to it" agentultra talks about using Go from other languages, not using C from Go.


>hard to bind other languages to it through an FFI.

Go has absurdly simple FFI through, as mratzloff mentioned, C. This is self-promotion and apologies, but see my submission history for two examples.


I know this post is about speed, but you should always remember that with node.js you only need JS developers while with Go you are probably going to need both Go and JS (for front-end stuff) developers.

Just my two cents.


By the same token, having frontend JS devs who don't have much experience in writing backends can leave you with a mess to clean up or completely re-write in the future.

I think this "benefit" of using the same language for front and backend is pretty over-hyped, as well. In theory, I can agree that it sounds good. In practice, use the best tool for the job on both ends to fit your team's abilities and strengths.

The other neat thing is that if your backend and UI are cleanly separated, you can swap them out individually without the other even noticing. If our backend gets to be too sluggish, I can replace it with something lower-level gradually over time (Go?) without leaving a bunch of deprecated backend JS cruft to clean out.


As a single developer on a project, I use js on the front-end, NodeJS and CouchDB (views written in js). This means that every step of the way is js, json and http, which is extremely liberating for me, as someone who's just starting out with web development.


I have a rich, client-side, fully-offline-capable javascript application. I started out with a different language for the backend, but over time the benefit of switching to Node and sharing one codebase became overwhelming.

Just to give one example: any data query can get answered locally or from the server, depending on what's already cached and whether you're online. Before, I had to implement every call twice and make sure they stayed in sync. Now I can implement once and run the same code in both places.

And other interesting possibilities open up. The server can just run the client's data synchronization code to pull changes from another server, giving realtime server-to-server replication nearly "for free".


I love having the server and client share the same codebase.

I'm working on a webapp that mimics functionality of an existing desktop application -- mostly as a self-educational project. As I'm writing it and adding features, I can push processing from server-side to client-side and vice versa with hardly more than a copy and paste of the relevant chunk of code.

Say I don't really care about the security of some particular process, and I really don't want to use up any additional resources on the server for it, I can just push that load to the client browser in a couple clicks.

Did I just write some feature into the client that I suddenly realize presents a security problem? With node, there's no need to change my frame of mind or rethink how to do something in a different language -- just copy and paste from client.js to server.js and do some slight cleanup.


There's a school of thought in software dev that a developer is a developer is a developer, and should be able to do all the things a developer should do.

That said, you hire "backend-focused" developers, "front-end" developers, etc., but to me that still means they should be able to do whatever tasks are necessary (Go, JS) to get their job done.


Nay. You're going to need both backend (routing, security, caching, HTTP, more caching, SQL/ORM/whatever, MVC, search/document stores, etc.) developers and frontend (HTML, CSS, HTML5 (re: "the hard stuff"), AJAX/JSONP/CORS/whatever) developers. Off the cuff, knowing another language is going to be very little of the overhead, especially when you consider one side is dealing with local services and the other is dealing mostly with the DOM (or using some library / "another language" to view the DOM in another light).


I am a "full stack" developer. I know the ins and outs of both my front and back end stacks (Groovy in my case), and I have a pretty good sense of the patterns required in this case.

However, one of the more frustrating pieces of this is that I never dive deep into one paradigm. I'm continually having to work bilingually. When I was working with node.js, I didn't feel nearly as much bilingual stress (even though I've written backends in python, ruby, java, groovy, and scala!)

As such, my single person one-off and side projects will be in node.js. It's about cognitive overhead in the moment. I agree that it's a mediocre solution for larger teams and enterprise is not its sweet spot, but it's great for projects in the small.


I didn't, by any means, intend to say it is not a good setup or to denigrate you. I don't think anyone is wrong for using it, even big teams, and I might even use the setup myself for certain projects. Only that it is rare to find really good full-stack developers, and developers willing to work equally on both ends of the stack. And a majority of the work is thinking about the architecture and product; only around 5% of the time, hand-waving, is going to be spent context-switching languages. I'd lose most of that time not having a richer language than what JS 1.5 offers, and if backend JS ever gets 1.7+ features, I am certain the browsers will not be in the same stride, at which point I lose again.


With the new transpilers, I don't think that's as much of a problem so long as one stays to modern (IE9+) browsers. (Let's see if this really comes true, but I know a few node.js types are anticipating this.)

In the domain I'm in (internal enterprise cough) I find that most people are expected to be full stack. On the other hand, these tend to be more simple UIs in general, so the front end knowledge isn't as vital (cue 'good' full stack developer comments...)


I solved the cognitive overhead problem by developing everything via progressive enhancement. This way JavaScript has a clearly defined role and place in the process and I don't have to think in multiple languages. Of course, a lot of developers these days don't want to do progressive enhancement and dismiss it as "impossible".


Not impossible so much as not worth the effort on the dollar. Reddit is fairly simple -- just a threaded-comment engine, if you will. You want the up/down vote to be progressive? Then every up/down is a form, already drastically increasing the page's weight, but I digress. You hook the body for form submits and reduce out your up/down forms; we are done, despite having a bunch of forms marking up the page.

Don't forget your form needs to be comprised of up and down submit-buttons; wouldn't want the progressive-CSS lords coming down on us.

Those fearful of JS come along... Collapse comments? Refresh the page, but don't forget to pass the previous state of the comments already collapsed! Submit an up arrow? Don't forget that blob of state! Loaded deep-nested replies? Push it into state! Careful; none of this matters to 99.9% of your users, because they understand they'll lose the statefulness on refresh, while everything stateful happens inline, without refreshing the page, bringing in sections of the page as dictated by our stateful blob.


What you're describing is not progressive enhancement. It's a mental exercise in re-implementing a website whose workflows were designed for heavy JavaScript usage without using JavaScript, which is silly.

For example, if you ever tried PE for real you would know that there is no point in implementing collapsible comments in pure HTML. That's exactly the kind of functionality you "enhance" via JavaScript. Of course, like most modern developers you've never really tried the practice, so you don't have any experience with such things.


So, with progressive enhancement...

I have a main screen with a query whose processing takes over a minute to perform on all of the data, but seconds for incremental changes after that. Each row returned has a detail screen associated with it, and the users have a requirement to be able to move to a detail screen, update a record, and return to the main screen without continually incurring the heavy query penalty each time they return to the main screen. Business also has a requirement that the database of record be normalized. (Denormalizing and precomputing is good, but you run into problems when the feed updates.) Do you...

A. Keep a NoSQL (say, Redis) cache, precompute, and perform two commits? This adds a non-trivial amount of complexity keeping them in sync, not including watching for jobs from outside feeds that can update at any time. This isn't bad for a simple system, or a system with a lot of engineers, but it's non-trivial to get right and it's easy to discover a bug after the fact.

B. Keep it all in the session? This has its own problems: if using an ORM, you're going to be increasing your memory load per user substantially. Otherwise, you're having to keep a separate data structure around and marshal/unmarshal, creating an additional complexity layer, and I haven't seen a great way to do this.

C. Keep a single page application and implement a polling mechanism behind the scenes to keep the information fresh? With a system like Ember or Angular, this may not be trivial but it's not terrible either, and I'd argue it's superior to a redis cache.

I know that I can provide a better user experience by prefetching. I don't know how to do this with progressive enhancement.


No one stops you from doing polling as an enhancement to a working HTML-only version. However, "single-page" has absolutely nothing to do with this, and neither do client-side MVC frameworks. I mean, they are not a prerequisite for polling; that's merely your choice.


Perhaps I misunderstand you. How would you implement the requirements above? How would you move to a detail screen and back in an HTML only solution that, most importantly, does not require rerunning the query each time?

And most importantly, you've now created two code paths, both of which need testing, both of which require maintenance, and one of which (in a web application) will never be used. What's the ROI?

Now, for some domains (say, e-commerce), progressive enhancement is the right way to go. I've done it. For web applications, which are the extension of client-server applications designed for a limited captive audience (employees), I don't see the ROI.


Or developers that are comfortable with both JS and Go.


Are these trivial micro benchmarks (simply responding with an empty 200) not totally misleading to focus on? Especially when you start to factor in the many real workloads that involve disk or network I/O.


It's not empty, because it's 1MB, but it is misleading, because in the original article someone's comment showed Node going much faster than Go. If it were empty then Node would be faster, because you can get 4000 req/sec if it's empty. I think something weird is going on though, because it should be more than 300 per sec with the new Node version even if it's 1MB.


I'd like to see Go compared to Netty as well. Is there a good place for this kind of thing? SO obviously doesn't care for it.


I think there is a typo in the go sample program.

bytes = 100

Should be

bytes[i] = 100

Right or wrong?


You're right. Unless I really don't understand Go code.
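For anyone reading along, the fill loop presumably should look like this (assuming the 1MB buffer setup from the article's sample):

  bytes := make([]byte, 1024*1024)
  for i := 0; i < len(bytes); i++ {
      bytes[i] = 100 // "bytes = 100" wouldn't even compile; it assigns an int to a slice
  }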



