
I do programming interviews and I've found candidates struggle a lot with making an HTTP request and parsing the response JSON in Go, while in Python it's a breeze. What makes it particularly hard? Is it the lack of generics, or of a dict data type?



I think it depends on what kind of data you're dealing with. If you know the shape of your data, it's pretty trivial to create a struct with json tags and serialize/deserialize into that struct. But if you're dealing with data of an unknown shape it can be tricky to work with that. In Python because of dynamic typing and dicts it's a little easier to deserialize arbitrary data.

Go's net/http is also slightly lower level. You have to concern yourself directly with some of the plumbing and complexity of making an http request and how to handle failures that can occur. Whereas in Python you can use the requests lib and fire off a request and a lot of that extra plumbing just happens for free and you don't have to deal with any of the extra complexity if you don't want to.

I find Go to be bad for interviewing in a lot of cases because you get bogged down with minutiae instead of working directly towards solving the exact problem presented in the interview. But that minutiae is also what makes Go nice to work with on real projects, because you're often forced into writing safer code.


It comes down to how the standard library makes you do things. I don't think there's any reason why a more stringly-typed way of handling JSON (or, indeed, a more high-level way of using HTTP) is outside of the realm of possibility for Go. It's just that the standard library authors saw fit not to pursue that avenue.

This variability is honestly one of the reasons why I dislike interviews that require me to synthesize solutions to toy problems in a very tightly constrained window of time, particularly if the recruiter makes me commit at the outset to one language over another as part of the overall interview process. It's frustrating for me, and, having been on the other side, it's noisy for the interviewer.

(In fact, my favorite interview loop of all time required that I use gdb to dig into a diabolical system that was crashing, along with some serious code spelunking. The rationale was that, if I'm good with a debugger and adept at reading the source that's in front of me, the final third of synthesizing code solutions to problems and conforming to institutional practice can be dealt with once I'm in the door.)


My favourite tech interview (so far) was similar: "here's the FOSS code base we're working on. This issue looks like about the size we can tackle in the time we have. Let's work on this together and solve it".

I got to show how I could grok a code base and work out where the problem was quickly, and work out a solution to the problem, and how I understood how to contribute a PR. Way better than random Leetcode bullshit, and actually useful: the issue was actually solved and the PR accepted.


I'm not a fan of this approach because candidates may see it as a "cheap" way to get actual work done without paying for it.


I like your story about debugging during an interview. I can say from experience, you always have one teammate that can just debug any problem. I am always impressed to watch and learn new techniques from them.


This has also been my experience, yeah. My interviewers were very interested in watching me rifle through that core dump. (:

Ultimately, it feels to me like selecting for people who both can navigate existing code and interrogate a running system (or, minimally, one that had gone off the rails and left clues as to why) is the right way to go. It has interesting knock-on effects throughout the organization (in, like, say, product support and quality assurance) that are direly understated.


In our case we give some high-level description beforehand (which mentions working with REST apis) and allow candidates to use any language of their choice.

Also in our case the API has typing in the form of generated documentation and example responses. I even saw one Go candidate copying a response into some web tool to generate Go code to parse that form of json.

I can also say that people who chose Java usually have even more problems: they start by creating 3-4 classes just to follow Spring patterns.


I think other languages cause folks to understand JSON responses as a big bag of keys and values, which have many convenient ways of being represented in those languages. When you get to Go and you want to parse a JSON response, it has to be a well-defined thing that you understand ahead of time, but I also think you adapt when doing this more than once in Go.


If I had one complaint, it’s the use of ‘tags’ to configure how json is handled on a struct, such that it basically becomes part of the struct’s type. It can lead to a fair bit of duplication of structs whose only difference is the json handling, or otherwise a lot of boilerplate code with custom marshal/unmarshal methods. In some cases the advice is even to parse the json into a map, do the conversion, and then serialise it again!

The case I ran into is where one API returned camelCase json but we wanted snake_case instead. Had to basically create another struct type with different json tags, rather than having something like decoders and encoders that can configure the output.

I like Go and a lot of the decisions it makes, but it has its fair share of pain points because of early design decisions that result in repetitive, overly imperative code. While that does help create code that is (mostly) clear and comprehensible, it can distract attention away from the intended behaviour of the code.


As an aside, you may be interested in some of the ongoing work to improve the Go JSON serializer/deserializer:

https://pkg.go.dev/github.com/go-json-experiment/json


That’s some good news that will hopefully smooth json handling out.


You could wrap it in another struct and use a custom MarshalJSON implementation.


    var res map[string]any
    err := json.Unmarshal(data, &res)


Uh huh....and what comes next?

Trying to descend more than a couple of layers into a GoLang JSON object is a mess of casts.


Well, one used to have https://github.com/mitchellh/mapstructure which assisted here, but the lib then got abandoned.



The same thing that happens in JS or Python.

If you get a key wrong, it throws (panics in Go).


I wasn't talking about getting the keys wrong, but rather the insane verbosity of GoLang - `myVariable := retrievedObject.(map[string]interface{})["firstLevelKey"].(map[string]interface{})["secondLevelKey"].(string)` vs. `myVariable = retrievedObject["firstLevelKey"]["secondLevelKey"]`

"Oh, but that's just how it is in strongly-typed languages" - that may well be true, but we're comparing "JS or python" with GoLang here.


Especially when you're not certain of the type used for numbers.


> I do programming interviews and I found candidates struggling a lot in doing http request and parsing response json in Go while in Python its a breeze, what makes it particularly hard, is it lack of generics or dict data type?

Have you considered that your interview process is actually the problem? Focus on the candidate’s projects, or their past work experience, rather than forcing them to jump through arbitrary leet code challenges.


Making an HTTP request and dealing with JSON data is a weed-out question at best. Not sure if you are interpreting the grandparent comment as actually having them write a JSON parser, but I don't think that's what they meant.


I either had that exact thing come up in an interview recently myself, or it wasn't clear to me that I was allowed to use encoding/json to parse the json and then deal with the result. I happened to bomb that part of the interview spectacularly, because I haven't written a complex structure parser in years: every language I've used for such tasks ships with proper, optimized libraries to do that.


Well, these are not arbitrary; we work with a number of json APIs on a weekly basis, supporting the ones we have and integrating new ones as well. This is a basic skill we are looking for, and I don't see it as a "leet code challenge".

Candidates might have a great deal of experience debugging assembly code or generating 3d models, but we just don't have tasks like that.


There is a dict-equivalent data type in Go for JSON (it's `map[string]any`), it's just rather counter-intuitive.

However, as a Go developer, I'm one of the people who consider that JSON support in Go should be burnt down and rebuilt from scratch. It's limited, annoying, full of nasty surprises, hard to debug, and slow.


There was a detailed proposal to introduce encoding/json/v2 last year but I don't know how far it's progressed since then (which you probably already know about but mentioning it here for others):

https://github.com/golang/go/discussions/63397


I've done, literally, hundreds and hundreds of coding interviews and an http api was a part of lots of them. Exported vs non-exported fields and json tags are about the only issues I've seen candidates hit in Go and I would just help in those kinds of cases. Python is marginally easier for many.

The problem was java devs. Out of dozens upon dozens of java devs asked to hit a json api concurrently and combine results, nearly ZERO java devs, including former google employees, could do this. Json handling and error handling especially confounded them.


Hold on, did you just say Go doesn't have a Dictionary data type?

I'm a Javascript, Lua, Python, and C# guy and Dict is my whole world.


It does. https://go.dev/blog/maps

What the poster was alluding to is that you usually prefer to deserialize to a struct rather than a record/dict/map.


Not a programmer, so this is every programmer's chance to hammer me on correctness.

No, Go doesn't have a type named Dict, or Hash (my Perl is leaking), or whatever.

It does have a map type[1], where you define your keys as one type and your values as another type, and that pretty closely approximates Dicts in other languages, I think.

[1]: https://go.dev/blog/maps


So, these types (and many more) are hash tables.

https://en.wikipedia.org/wiki/Hash_table

They're a very common and useful idea from Computer Science so you will find them in pretty much any modern language, there are a lot of variations on this idea, but the core idea recurs everywhere.


I have a quibble here. A hash table, the basic CS data structure, is not a two-dimensional data structure like a map, it is a one-dimensional data structure like a list. You can use a hash table to implement a Map/Dictionary, and essentially everyone does that. Sets are also often implemented using a hash table.

The basic operations of a hash table are adding a new item to the hash table, and checking if an item is present (potentially removing it as well). A hash table doesn't naturally have a `V get(key K)` function, it only naturally has a `bool isPresent(K item)` function.

This is all relevant because not all maps use hash tables (e.g. Java has TreeMap as well, which uses a red-black tree to store the keys). And there are uses of hash tables besides maps, such as a HashSet.

Edit: the term "table" in the name refers to the internal two-dimensional structure: it stores a hash, and for each hash, a (list of) key(s) corresponding to that hash. Storing a value alongside the key is a third "dimension".


I think I'd want to try to decode into map[string]interface{} (offhand), since string keys can be coerced to that in any event (they're strings in the stream, quoted or otherwise), and a key can hold any valid json scalar, array, or object (another json sub-string).


That of course works, but the problem is then using this. Take a simple JSON like `{"list": [{"field": 8}]}`. To retrieve that value of 8, your Go code will look sort of like this:

  var v map[string]any
  json.Unmarshal(myjson, &v)
  lst := v["list"].([]any)
  firstItem := lst[0].(map[string]any)
  field := firstItem["field"].(float64)
And this is without any error checking (this code will panic if myjson isn't a json byte array, if the keys and types don't match, or if the list is empty). If you want to add error checking to avoid panics, it gets much longer [0].

Here is the equivalent Python with full error checking:

  try:
    v = json.loads(myjson)
    field = v["list"][0]["field"]
  except Exception as e:
    print(f"Failed parsing json: {e}")
[0] https://go.dev/play/p/xkspENB80JZ


And, if you hate strong typing, there's always map[string]any.


Really, the mismatch is at the JSON side; arbitrary JSON is the opposite of strongly typed. How a language lets you handle the (easily fallible) process of "JSON -> arbitrarily typed -> the actual type you wanted" is what matters.


    > arbitrary JSON is the opposite of strongly typed
On the surface, I agree. In practice, many big enterprise systems use highly dynamic JSON payloads where new fields are added and changed all the time.


Go has had a dict-like data type from the jump; they're called "maps" in Go.

Some of early Go's design decisions were kinda stupid, but they didn't screw that one up.


Go has maps, json parsing and http built in. I'm not exactly sure what this person is referring to. Perhaps they are mostly interviewing beginners?


Go maps have a defined type (like map[string]string), so you can only put values of that type in them. A JSON object with (e.g) numbers in it will fail if you try and parse that into a map of strings.

As others have said, the issue with Go parsing JSON is that Go doesn't handle unstructured data at all well, and most other languages consider JSON to be unstructured data. Go expects the JSON to be strongly typed and rigidly defined, mirroring a struct in the Go code that it can use as a receiver for the values.

There are techniques for handling this, but they're not obvious and usually learned by painful experience. This is not all Go's fault - there are too many endpoints out there that return wildly variable JSON depending on context.


I feel like good JSON handling is sort of table stakes for any language for me these days.

The pain of dealing with JSON in Go is one of the primary reasons I stick mostly with nodejs for my api servers.


> The pain of dealing with JSON in Go is one of the primary reasons I stick mostly with nodejs for my api servers.

Unless you're dealing with JSON input that has missing fields, or unexpected fields, there is no pain. Go can natively turn a JSON payload into a struct as long as the payload's fields recursively match the struct's fields!

If, in any language, you're consuming or generating JSON that doesn't match a specific predetermined structure, you're yolo'ing it and all bets are off. Go makes this particular footgun hard to do, while JS, Python, etc make it the default.

Default footguns are a bad idea, not a good idea.


This.

In $other_language you'll parse the JSON fine, but then smack into problems when the field you're expecting to be there isn't, or is in the wrong format, or the wrong type, etc.

In Go, as always, this is up front and explicit. You hit that problem when you parse the JSON, not later when you try to use the resulting data.


Go's JSON decoder only cares whether the fields that match have the expected JSON type (array, object, number, string, boolean, or null). Anything else is ignored, and you'll just get bizarre data when you work with it later.

For example, this will parse just fine [0]:

  type myvalue struct {
    First int `json:"first"`
  }

  type myobj struct {
    List []myvalue `json:"list"`
  }
  js := "{\"list\": [{\"second\": \"cde\"}]}"
  var obj myobj
  err := json.Unmarshal([]byte(js), &obj)
  if err != nil {
    return fmt.Errorf("Error unmarshalling: %+v", err)
  }
  fmt.Printf("The expected value was %+v", obj) //prints {List:[{First:0}]}
This is arguably worse than what you'd get in Python if you tried to access the key "first".

[0] https://go.dev/play/p/m0J2wVyMRkd


It totally makes sense from a Go perspective: You created a struct, tried (but failed) to populate it with some json data, and ended up with a value initialised to its zero-value. This is fine :)

One of the techniques for dealing with JSON in Go is to not try to parse the entire JSON in one go, but to parse it using smaller structs that only partially match the JSON. e.g. if your endpoint returns either an int or a string, depending on the result, a single struct won't match. But two structs, one with an int and one with a string - that will parse the value, and then you can work out which one it was.


> It totally makes sense from a Go perspective: You created a struct, tried (but failed) to populate it with some json data, and ended up with a value initialised to its zero-value. This is fine :)

To me it looks like a footgun: if the parsing failed then an error should have been signalled. In this case, there is no error and you silently get the wrong value.


Yeah, I presented that wrong. It's not actually a failure as such.


> It totally makes sense from a Go perspective: You created a struct, tried (but failed) to populate it with some json data, and ended up with a value initialised to its zero-value. This is fine :)

I do agree that there are good reasons on why this behaves the way it does, but I don't think the reason you cite is good. The implementation detail of generating a 0 value is not a good reason for why you'd implement JSON decoding like this.

Instead, the reason this is not a completely inane choice is that it is sometimes useful to simply not include keys that are meant to have a default value. This is a common practice in web APIs, to avoid excessive verbosity; and it is explicitly encoded in standards like OpenAPI (where you can specify whether a field of an object is required or not).

On the implementation side, I can then get away with always decoding to a single struct, I don't have to define specific structs for each field or combination of fields.

Ideally, this would have been an optional feature, where you could specify in the struct definition whether a field is required or not (e.g. something like `json:"fieldName;required"` or `json:"fieldName;optional"`). Parsing would fail if any required field was not present in the JSON. However, this would have been more work for the Go team, and they generally prefer to implement something that works and be done with it, rather than covering every important case.

Separately, ignoring extra fields in the JSON that don't match any fields in the struct is pretty useful for maintaining backwards compatibility. Adding extra fields should not generally break backwards compatibility.

> One of the techniques for dealing with JSON in Go is to not try to parse the entire JSON in one go, but to parse it using smaller structs that only partially match the JSON. e.g. if you endpoint returns either an int or a string, depending on the result, a single struct won't match. But two structs, one with an int and one with a string - that will parse the value and then you can work out which one it was.

I have no idea what you mean here. json.Unmarshal() is an all-or-nothing operation. Are you saying it's common practice to use json.Decoder instead?


> I have no idea what you mean here. json.Unmarshal() is an all-or-nothing operation. Are you saying it's common practice to use json.Decoder instead?

No, I mean you create a struct that deals with only a part of the JSON, and do multiple calls to Unmarshal. Each struct gets either populated or left at its zero-value depending on what the json looks like. It's useful for parsing json data that has a variable schema depending on what the result was.


Umm, you can unmarshal into a map[string]any, you know?

    dataMap := make(map[string]any)
    err := json.Unmarshal(bytes, &dataMap)


You can, but then it's a lot of work to actually traverse that map, especially if you want error handling. Here is how it looks for a pretty basic JSON string: https://go.dev/play/p/xkspENB80JZ. It's ~50 lines of code to access a single key in a three-layer-deep JSON.


It's more like 30 lines of code without the prints. However, one should generally write generic helper functions for this. The k8s apimachinery module has helper functions useful for this sort of thing, e.g. `NestedFieldNoCopy` and its wrapper functions.

https://github.com/kubernetes/apimachinery/blob/95b78024e3fe...

Ideally, such `nested` map helper functions should be part of the stdlib.


Sure, in production you'd definitely want something like that, but the context was an interview exercise, I don't think you should go coding generic wrappers in that context.


It does (it's called a map) and Go does have generics, the previous poster clearly doesn't know what they're talking about.





