Goji: a web microframework for Go (goji.io)
239 points by zenazn on April 22, 2014 | 86 comments



This is going to sound a little dismissive, but I don't mean it to be:

I'm not sure I understand the value that these frameworks offer beyond the HTTP server interface Golang supports out of the box, plus a URL router like "pat" (or whatever the cool kids are using now other than "pat").

I see the clean middleware abstraction, but I find the idiomatic closure-based implementation of middleware adds only a couple extra lines of code, and in return I get total flexibility.

What's this doing that I'm not seeing? I'm sure there's something; I'm writing this comment out of ignorance.


Actually, Goji grew out of a single deficiency in "pat": the fact that it does not have a standard way of defining request context.

The big use case here is how you'd write a middleware that did authentication (using API keys, session cookies, ???) and emitted a username for other middleware to consume. With net/http, you end up with a lot of coupling: your end handler needs to know about every layer of middleware above it, and you start losing a lot of the benefit of having middleware in the first place. With an explicit middleware stack and a universal interface for middleware contexts, this is easy: everyone can code to the same single context object, and instead of standardizing on weird bound variables (or a global locked map a la gorilla), you just need to standardize on a single string key and a type.

I think my ideal world would involve Go providing a map[string]interface{} as part of the http.Request struct in order to implement this behavior, but until we get that, I think Goji's web.C ("the context object") is the next best thing.
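
To make that concrete, here's a rough standalone sketch of the pattern (deliberately not Goji's actual API; the Env type, handler signature, and header name are all made up for illustration):

    package main

    import (
        "fmt"
        "net/http"
    )

    // Env is the per-request scratch space: middleware and handlers only
    // have to agree on a key ("user") and a type (string).
    type Env map[string]interface{}

    // Handler is a hypothetical handler type that receives the env.
    type Handler func(env Env, w http.ResponseWriter, r *http.Request)

    // auth authenticates the request (trivially, from a header) and
    // publishes the username under the agreed-upon key.
    func auth(next Handler) Handler {
        return func(env Env, w http.ResponseWriter, r *http.Request) {
            if key := r.Header.Get("X-API-Key"); key != "" {
                env["user"] = "user-for-" + key // stand-in for a real lookup
            }
            next(env, w, r)
        }
    }

    // hello doesn't know or care which middleware set "user"; it only
    // needs the key and the type.
    func hello(env Env, w http.ResponseWriter, r *http.Request) {
        if u, ok := env["user"].(string); ok {
            fmt.Fprintf(w, "hello, %s\n", u)
            return
        }
        http.Error(w, "unauthorized", http.StatusUnauthorized)
    }

    func main() {
        stack := auth(hello)
        http.HandleFunc("/hello", func(w http.ResponseWriter, r *http.Request) {
            stack(make(Env), w, r) // a fresh env for every request
        })
        http.ListenAndServe(":8080", nil)
    }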

There's one other thing pat hacks around: the issue of how to pass bound URL variables to the resulting handler. At first I was a little grossed out at how pat did it, but I've sort of come to terms with it. I still think Goji's way is better, but I don't think it's the reason I wrote (or a reason to use) Goji.


I agree that 'context' and passing this context between middleware is the biggest missing item from the standard http+"pat".

This library (Goji) solves it with a map[string]interface{}. This works, but has the downside that you need to type-assert your values from this map if you want to use them.

I wrote a library a while ago (http://www.github.com/gocraft/web) that uses reflection so that your contexts can be strongly typed. Martini offers a similar approach.


Unless I'm not understanding this correctly, this sounds exactly like how middleware state is implemented in Ruby's Rack and it is one of the biggest warts of Rack that the core team is trying to fix.


Yeah, I looked at gocraft/web for a long time before writing Goji. It's a good library, and I think it does a lot of things right. But at the end of the day, just like my disagreement with Martini, this comes down to a difference in principles. I think there are two theories of Go library design here; gocraft/web has chosen one and Goji the other, and let me preface this by saying that I'm not sure the way I chose is correct.

On one hand, a library can allow its users access to typed structs—this is the way gocraft/web does it. The upside here is huge: applications are now type-safe, and can lean on the compiler to enforce their own correctness. The library code that backs this behavior can be arbitrarily gnarly, but you only have to write it once, after which it'll Probably Be Correct (tm).

On the other hand, you can provide a standard "universal" context type like Goji does. Since the library makes no statements about the sorts of fields you have or their types, applications have to do a bunch of shenanigans with type casts in order to extract the typed values they expect. The upside here, however, is the standardization and what that allows. I've found that in most of the web applications I've built, only a small fraction of the middleware is authored by me: I end up pulling in the standard request logging middleware, or the middleware that assigns request IDs, or the middleware that does CSRF protection. There's a huge amount of value in being able to write (read: "let someone else write") those middlewares once and being able to use them in any application. With arbitrary application-provided structs, this isn't possible [1], but if you can instead standardize on types and keys out-of-band, you can mix and match arbitrary middleware with impunity.

This alone is one of those awkward engineering tradeoffs where you're exchanging one nice thing (application-level type safety) for another (standard interface, and hopefully an ecosystem of drop-in middleware), and given just this I'd argue that the ecosystem is probably more important. But the frosting on this particular cake is that, with only marginally more awkwardness, you can actually get application-level type safety too: look at what I did for the request ID middleware that ships with Goji, for instance (https://github.com/zenazn/goji/blob/master/web/middleware/re...). Again, it's not quite as nice as a user-defined struct, but in the grand scheme of tradeoffs and sacrifices, I don't think it's so terribly bad.
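
The shape of those helper functions is roughly this (hypothetical names over a plain map[string]interface{} env, not the actual middleware package):

    // Hypothetical typed accessors: the type assertion lives here, once,
    // and application code gets a typed API.
    const requestIDKey = "reqid"

    // SetRequestID is what the request ID middleware would call.
    func SetRequestID(env map[string]interface{}, id string) {
        env[requestIDKey] = id
    }

    // GetRequestID is what handlers call; ok is false if no request ID
    // middleware ran for this request.
    func GetRequestID(env map[string]interface{}) (id string, ok bool) {
        id, ok = env[requestIDKey].(string)
        return id, ok
    }

Application code never writes the type assertion itself, which is most of the type safety you were after.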

So TL;DR, Goji chose standardization and composability in the design of its context object over application type safety, but if you're willing to write a few helper functions I don't think you end up losing the type safety anyways.

[1]: actually, come to think of it, with a lot of reflection and some struct tags you could probably pull it off. (Something like "struct { RequestID string `context:"reqid"`}", and the request ID assigning middleware would hunt around for a string field tagged "reqid"). But it seems like it'd introduce a lot of coupling between middleware and the application which I'm not sure will turn out well—any time a middleware needed a bit of extra storage, every application would have to change. In any event, if someone builds this I'd love to play around with it.
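
For the curious, a minimal sketch of that reflection-and-struct-tags idea (purely illustrative; nobody ships this today as far as I know):

    package main

    import (
        "fmt"
        "reflect"
    )

    // setTaggedString finds the first string field on *ctx tagged
    // `context:"<tag>"` and sets it. Returns false if no such field exists.
    func setTaggedString(ctx interface{}, tag, value string) bool {
        v := reflect.ValueOf(ctx)
        if v.Kind() != reflect.Ptr || v.Elem().Kind() != reflect.Struct {
            return false
        }
        v = v.Elem()
        t := v.Type()
        for i := 0; i < t.NumField(); i++ {
            f := t.Field(i)
            if f.Tag.Get("context") == tag && f.Type.Kind() == reflect.String && v.Field(i).CanSet() {
                v.Field(i).SetString(value)
                return true
            }
        }
        return false
    }

    func main() {
        type AppContext struct {
            RequestID string `context:"reqid"`
        }
        var c AppContext
        setTaggedString(&c, "reqid", "abc-123") // what the middleware would do
        fmt.Println(c.RequestID)                // abc-123
    }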


I don't really get it, either.

Generally, what I don't like about web frameworks in Go is that they often don't integrate too well with the existing net/http package. I think we don't need web frameworks, we need web libraries that we can easily plug together as we want without any additional glue.


I agree, I don't see it here, but there are some that have a lot of decent addons, Gorilla being the big one among them.

But yes, aside from robust routing, Go handles the microframework role fairly well out of the box.


I started with Gorilla due to its helpers with cookies. Then Martini came along and I had the same opinion as you, but soon realized that there are some nice utilities that help common scenarios.

I always think to myself that those nice utilities (which I would otherwise have to build each time myself) will some day bite me in the butt, once they've abstracted too far from the standard HTTP request and response.

I sometimes have to write in C# MVC, which is a poor copy of the Rails framework. It's such a nightmare to do things in a way the framework did not intend. I now remind myself that there are no free gifts when it comes to web frameworks.


This is more or less the philosophy of the Clojure community when it comes to web frameworks. Not sure what your reasons for using Go are, but if you're open to dynamic typing you would probably appreciate Clojure's emphasis on avoiding this kind of complexity.

Though to be honest I do see a decent amount of value in not rewriting form handling/validations/auth in a bunch of different ways.


I like Clojure a lot, and if I was building a new web app today, I'd strongly consider using it instead of Rails (for what it's worth: Golang is great for JSON web services, but not IMO great for full-featured web apps, even if you're using something like Angular and a "single-page" type architecture; you pay a huge price in flexibility for it).

However, to talk myself out of using Rails, I'd need to convince myself that I would (a) be able to use Postgres and (b) would not be writing lots of SQL. I can write SQL, and have been writing it for a long time, and I know that writing and maintaining it slows me down.

There's no good Clojure ORM, is there?


"even if you're using something like Angular and a "single-page" type architecture; you pay a huge price in flexibility for it" Can you elaborate more on that statement? I use angular/golang api and rails/angular both often.


I find the HTML templating options for Golang particularly painful, is the subtext there.

(I like Golang a lot; we did the bulk of microcorruption.com in it. But the web front end, which is a tiny amount of code, that's a Rails app.)


When I wrote a single-page Angular app with a Go backend, we barely did any of the HTML generation on the Go side, basically just enough for AngularJS to take over, which was essentially just one index.html template. I found it worked really well as an AngularJS backend.

I previously did a multi-page app using html/template and I found it a bit more painful, especially since at the time there wasn't a ready-to-use solution for dynamic template compilation during development and static compilation in production. I'm not sure if there is one now, but I haven't looked in quite some time.
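
One workaround I've seen (just a sketch, assuming a templates/ directory; not a pointer to any particular library) is to parse once for production but re-parse on every request behind a -dev flag, so edits show up without restarting:

    package main

    import (
        "flag"
        "html/template"
        "net/http"
    )

    var (
        dev      = flag.Bool("dev", false, "re-parse templates on every request")
        prodTmpl *template.Template
    )

    // render re-reads templates from disk in dev mode, and otherwise uses
    // the templates parsed once at startup.
    func render(w http.ResponseWriter, name string, data interface{}) error {
        if *dev {
            t, err := template.ParseGlob("templates/*.html")
            if err != nil {
                return err
            }
            return t.ExecuteTemplate(w, name, data)
        }
        return prodTmpl.ExecuteTemplate(w, name, data)
    }

    func main() {
        flag.Parse()
        prodTmpl = template.Must(template.ParseGlob("templates/*.html"))
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            if err := render(w, "index.html", nil); err != nil {
                http.Error(w, err.Error(), http.StatusInternalServerError)
            }
        })
        http.ListenAndServe(":8080", nil)
    }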

As for the html/template language itself, I've kind of grown to like it, but maybe that's just me ;)


One solution might be to include a JavaScript runtime in your app such as monkey (https://github.com/idada/monkey), load a JavaScript template framework like underscore.js (http://underscorejs.org/) into the runtime context, and create a render function that takes a JSON string and a template string as parameters. You can then write the return value back into your HTTP response.

It's a bit of a hack/convoluted solution but it allows you to make use of a few javascript template libraries which are IMHO more pleasant to work with.


Ahh thanks. Yeah, almost all of the apps I have written lately are a rest/json api service in rails or golang (martini) and the client side code is angular, ios, etc. So I don't care nor use html templating in go.


What did you use for your db backend with Go? Gorp, beedb, database/sql, other? Just curious, as I've been looking at building some api's with go and martini lately.


The short answer is no. There's honeysql, which basically lets you express SQL in terms of Clojure's data structures so you can get more reuse than you can with straight SQL strings, but it's not really an ORM as I understand it.


There isn't one cohesive solution.

I imagine most people evolve helper functions around https://github.com/clojure/java.jdbc and then use something like https://github.com/jkk/honeysql or http://sqlkorma.com/ for generating actual select query SQL.



It seems to me that Rails' main advantage over Clojure+microframeworks for web development is the ability to use mature libraries to handle common tasks: user management with devise, for example.


Don't forget it is trivial to call Java libraries from Clojure which for breadth, quality and maturity eclipse all the other platforms.


It is good to have many web microframeworks in a language ecosystem. In Python there are probably a hundred of them. Many are never picked up by the community (a natural selection), and only a few really survive. It's up to you whether you choose Revel, Martini, Goji, or whatever else. Today, thousands of apps run on web.py, yet most of its source code has been untouched for the last 3-5 years (https://github.com/webpy/webpy/tree/master/web). It's impressive that it just works!

Personally, I am looking for frameworks that many people rely on, that are maintained as needed, and that just work. There might be a +-10% difference in the QPS these frameworks' URL routers can handle when rendering a 'hello world' page.

So this is a nice attempt, I would say: it looks cleaner than Martini and still supports middleware. On the other hand, Martini has support for serving static files, logging, and panic recovery, which are also good, and it has a bigger fanboy community around it: https://github.com/go-martini/martini



Perhaps this is not the best place for this question, but as a frequent HN reader, I'm constantly told that Go is great to develop in and very performant. However, it's not clear to me how Go suits a web application with relational data.

From what I've gleaned, an ORM does not make sense in Go, so how would this type of application be approached? Writing a lot of ORM-type boilerplate? A completely different way? Or is Golang a bad choice for such an application?


I suppose the Go philosophy would discourage developers from tightly coupling Go types and their relational representations through an ORM. The idiomatic way of mapping your objects to a database is by defining a thin interface around a sql.DB, with first-class operations for your concrete types.

    type User struct {
        ID        int
        Permalink string
    }
    
    type Storage struct{ *sql.DB } // embed *sql.DB so its methods (Exec, Query, ...) are promoted
Then you can either do

    func (s *Storage) WriteUser(user User) error {
        if _, err := s.Exec(
            "REPLACE INTO users VALUES (?, ?)",
            user.ID,
            user.Permalink,
        ); err != nil {
            return fmt.Errorf("write user failed: %s", err)
        }
        return nil
    }
or

    func (u User) Write(storage Storage) error {
        if _, err := storage.Exec(
            "REPLACE INTO users VALUES (?, ?)",
            u.ID,
            u.Permalink,
        ); err != nil {
            return fmt.Errorf("write user failed: %s", err)
        }
        return nil
    }
It's a bit more laborious in the sense of keystrokes, but it's also more explicit, which is, on balance and over the lifetime of a large software project, a good thing.


Thank you for the detailed answer.

> It's a bit more laborious in the sense of keystrokes, but it's also more explicit, which is, on balance and over the lifetime of a large software project, a good thing.

OTOH, it feels like a high development cost to pay, relative to the alternatives out there. You certainly wouldn't use it for an MVP. Also, it would increase the opportunities to introduce bugs into the application (unless one actually feels they can write something better than, say ActiveRecord, from scratch).

I have a side-project in RoR that would benefit greatly from the performance boost provided by Go. However, the idea of writing all of the ORM functionality that ActiveRecord handles for me (not just managing of objects as you've shown, but the relationships between them) is quite daunting.


    > (unless one actually feels they can write something 
    > better than, say ActiveRecord, from scratch).
The point is that ActiveRecord's method of modeling, especially when it comes to dynamically mapping language constructs to a storage layer via SQL, is too implicit. Too costly. Actively harmful! The point is to get developers to stop thinking in terms of ORM abstractions, and start thinking in terms of the actual transforms and manipulations that are occurring.

    > You certainly wouldn't use it for an MVP.
I think you overestimate the cost of pressing buttons on your keyboard.


> The point is to get developers to stop thinking in terms of ORM abstractions, and start thinking in terms of the actual transforms and manipulations that are occurring.

Isn't the point also to encourage code re-use and abstraction? I mean, golang has packages for a reason. I suppose my question is more along the lines of whether we'll ever see a package in go that would standardize the data-object divide or whether this will always be a "roll your own" domain?

> I think you overestimate the cost of pressing buttons on your keyboard.

Then why don't we "roll your own" for everything?


    > I suppose my question is more along the lines of whether 
    > we'll ever see a package in go that would standardize 
    > the data-object divide
There is no lucid standardization possible for the data-object divide, in Go or any other language. Too much depends on the semantics of the object and data system. Or, to put it another way: that standardization (that abstraction) is SQL itself.


What sagichmal is basically saying, without saying it, is that Go can't do what you want: you just can't write a generic Active Record framework in Go, nor a generic data mapper.

The right approach, when a language can't do what you want, IS code generation with a third-party dynamic language.

If you know Ruby, for instance: Ruby excels at code generation, so you could generate Go code from a SQL schema or any manifest, then augment it with custom code in another Go file that imports the generated code.
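
For simple cases you don't even need Ruby; here's a rough sketch of the same code-generation idea using Go's own text/template as the generator (the Table description is hand-written here, and the generated code assumes a Storage type with an Exec method, like the example upthread):

    package main

    import (
        "os"
        "text/template"
    )

    // Column and Table describe the schema we generate code for; in
    // practice you would read this from the database or a manifest.
    type Column struct {
        GoName, GoType string
    }

    type Table struct {
        GoName, SQLName string
        Columns         []Column
    }

    // modelTmpl emits a struct and a write helper for one table.
    var modelTmpl = template.Must(template.New("model").Parse(`
    type {{.GoName}} struct {
    {{range .Columns}}    {{.GoName}} {{.GoType}}
    {{end}}}

    func (s *Storage) Write{{.GoName}}(v {{.GoName}}) error {
        _, err := s.Exec(
            "REPLACE INTO {{.SQLName}} VALUES ({{range $i, $c := .Columns}}{{if $i}}, {{end}}?{{end}})",
    {{range .Columns}}        v.{{.GoName}},
    {{end}}    )
        return err
    }
    `))

    func main() {
        users := Table{
            GoName:  "User",
            SQLName: "users",
            Columns: []Column{
                {GoName: "ID", GoType: "int"},
                {GoName: "Permalink", GoType: "string"},
            },
        }
        // Writes generated Go source to stdout; pipe through gofmt in practice.
        if err := modelTmpl.Execute(os.Stdout, users); err != nil {
            panic(err)
        }
    }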


Ironically, the "Active Record" concept originated in statically-typed languages -- look at the Java example code in the PEAA book, which looks much like the Go example code here.


Yuk, that's just manually doing what ORMs give you automatically; that's not a good thing, you're being a human compiler.


    > you're being a human compiler.
The transformation from "type instance in my language" to "relational data in my storage engine" is absolutely not "compilation". It's a subtle translation of data and grammar, and as the type relationships grow more complex, ORMs fail. The lesson of ActiveRecord, from the perspective of Go, is that ORMs are fundamentally broken abstractions.


Don't be pedantic, my point was clear and you obviously understood it.

As for the lessons of active record, says who?


I do, for one, but don't take my word for it... lots of very intelligent people have problems with ORMs. http://blogs.tedneward.com/2006/06/26/The+Vietnam+Of+Compute... presents a good summary of the problems involved (forgive the inflammatory title).

Despite its many problems, SQL remains the best way of interacting with a relational database.


> lots of very intelligent people have problems with ORMs.

And lots of very intelligent people like them. Both statements are meaningless appeals to authority.

> Despite its many problems, SQL remains the best way of interacting with a relational database.

That's your opinion, it's certainly not a fact, and just as many disagree as would agree. Secondly, ORMs don't stop you from using SQL where it's beneficial, so ruling out the ORM because you don't like it for some cases is throwing out the baby with the bath water. For standard CRUD operations, ORMs are the best approach by far. Your hand-written CRUD operations gain you nothing but extra work.


> And lots of very intelligent people like them. Both statements are meaningless appeals to authority.

That was meant as a preface to a link that goes into much more detail. There are technical reasons why object-oriented programming and the relational model do not map well to each other, the infamous object-relational mismatch.

> That's your opinion, it's certainly not a fact, and just as many disagree as would agree.

I suspect rather more would disagree than would agree with me, if we're appealing to popularity. But given that the wire protocols for most relational databases (yes, there are exceptions) primarily accept SQL, and all their optimizers are SQL-based, I don't think this is actually as contentious a point as you believe it is. It would be different if ORMs, like optimizing compilers, were able to take high-level information about your application and query structure and transform it into better SQL--but that is in fact what modern RDBMSes do themselves. The more information you provide them, the better they are generally able to optimize your query. In my experience, ORMs usually have the opposite effect--they have less specific information about how you want your data to be used than an SQL query would, so they generate SQL in smaller chunks which can't be optimized as effectively. ORMs also rarely support all the features of any but the simplest RDBMSes, which means that you end up having to drop down into SQL in many cases to take advantage of them.

> Secondly, ORM's don't stop you from using SQL where it's beneficial, so ruling out the ORM because you don't like it for some cases is throwing out the baby with the bath water.

Sure. To continue the extremely apt compiler analog, I can also use inline assembly in any C program (not standard C, perhaps :)) for things I can't do in C. In both cases, every time I have to do that, it is a failure of the original language to allow me to do the things I want to do. Any time I have to do that, I also have to independently verify safety, correctness, and platform independence, as well as correctness guarantees from the compiler (such as there are any in C). And it requires domain-specific knowledge in an area that is dramatically outside my comfort zone.

If people were always having to drop in and out of inline assembly in C, it would have failed as a language long ago. The fact that people are still using it is largely testament to the fact that most C instructions (on low optimization levels, anyway) translate fairly straightforwardly to assembly instructions, and can be optimized to incredible extents by the compiler using knowledge of entire functions (or even the whole program). In performance-critical situations it is sometimes possible to do better than the compiler by dropping down into assembly, but the situations where you want to do that are very rare. By contrast, ORMs can't do any of those things--and I find myself dropping down into "inline SQL" so often that it's become my default approach. In sharp contrast to an optimizing compiler, the odds that even an SQL newcomer can produce better-optimized code than the ORM are shockingly high.

This is all not even going into the real reason I've given up on ORMs, which is that they are terrible at guaranteeing consistency of your data in concurrent situations. In part out of deference to less-able databases, many ORMs will use transactions only begrudgingly and are often "unsafe by default" (using low levels of transactional isolation like REPEATABLE READ), which makes dropping into SQL not just a performance concern, but one of data integrity. And if you don't use a very strong, serializable isolation level, you have to worry about dealing properly with deadlocks, livelocks, and other sorts of concurrency failures, which are nearly impossible to reason about unless you explicitly acquire the locks. I can't stress enough how nightmarishly difficult this can be in a large application even without an ORM. Adding an ORM to the equation basically means you spend half your time debugging the SQL, and the other half debugging the ORM. Turning on serializable isolation would help a lot, if you can take the performance hit, but the reality is that for similar reasons to why ORMs can't optimize queries very well, they're also not able to reason effectively about transaction lifetimes. Holding onto transactions longer than necessary is a great way to kill performance without substantially improving reliability. In the meantime, the majority of people who assume ORMs are protecting them from concurrent data access issues are very likely to have subtle, nearly undetectable data races that lead to extremely problematic bugs down the line.

So ORMs provide neither safety, nor speed, nor (IMO) any reduction in the amount of information you have to keep in your head to reason effectively about your program's behavior (since you have to know SQL anyway) over standard SQL. But, you say,

> For standard CRUD operations, ORMs are the best approach by far. Your hand-written CRUD operations gain you nothing but extra work.

To me, this is the strangest idea of all. Standard CRUD operations aren't very verbose in SQL, either, and in a properly normalized database, it's not likely that the majority of the time all you're interested in is dealing with a single row from a single table in a single transaction (more often a view, perhaps, but if you're using views you're already well outside of familiar ORM territory). I've personally found ORM behavior to be precisely what I actually wanted a pretty low percentage of the time, since even the smarter ORMs have a tendency to acquire data I don't need (especially if you have foreign keys defined), make repeated unnecessary query requests, and insist upon jumping through elaborate hoops to enable common query patterns (try referencing a join table in Django that doesn't have a unique ID--or worse, using a table that has a multicolumn primary key). Your ORM only saves you work if you treat the database as a dumb store--a faster filesystem, basically. There are certainly situations that call for dumb storage like that, of course, but in my experience there are not a whole lot of them.

What's funny is that I used to completely agree with you. I saw monstrous SQL queries eating up valuable database and developer resources, surrounded by custom-built frameworks (different ones depending on which developer was working on a portion of the application), with hand-written migrations[1] that were supposedly rerunnable but somehow never were in practice, and my reaction was, "this is insane! None of this logic should be in the database! We should just use an ORM, and rely on the work of much smarter people who have surely reasoned about these problems much longer than we have, figured out best practices for data access, and encoded them into libraries. To do otherwise is the worst kind of not-invented here syndrome and is clearly the result of DBAs clinging desperately to jobs that became irrelevant ten years ago."

The more I learned about databases, though, the more my tune started to change. It turns out that (and yes, this is persistently my opinion :)) ORMs aren't that at all. The best practices for handling and accessing data are encoded into the RDBMSes themselves, and for technical reasons that I've since uncovered (for starters: catalog access), it is nearly impossible for the application to do better without itself becoming the main source of information about the data and metadata of the application. I'm not saying that situations like the above are good--far from it--only that the solution isn't an ORM.

SQL is an ugly, ugly language. It's insanely complex to parse and has baffling type coercion, it repeats the mistakes of past languages in including NULL and handles it in the worst way possible, it lacks (by default) many convenient structural types (but advanced systems like PostgreSQL get around this issue), and the promise of logical independence is often more myth than reality. I would love for someone to come up with a better alternative. But ORMs sure aren't it. They don't solve any of SQL's real problems, or even attempt to, and introduce a whole new set. Their singular virtue is that syntactically they play nicely with your language of choice (if it happens to be object-oriented[2]), and that alone isn't very compelling.

[1] Hand-written migrations are still usually a terrible idea, though RDBMSes with proper transactions can mitigate this somewhat. This is somewhere where I still hold out hope that someone can come up with a much better solution. However, I think good migration frameworks are somewhat orthogonal to the ORM problem, and ORMs can't really help out much besides providing templates for schema diffs, since the hard part of a migration is actually migrating the data.

[2] I think a lot of the problems with ORMs are also problems with object-oriented programming in general. A declarative or functional ORM framework would actually be able to function as a real replacement for SQL, at least in theory. I've investigated such projects with interest, especially initiatives like SPARQL or the long-ongoing DataMapper2 project in Ruby (as well as abandoned good ideas like QUEL). But so far, none of them seem to be able to gain significant traction, which is disappointing but perhaps expected.


ORMs are never the "best" approach to data access. They are a heavyweight, leaky bridge between two very different abstractions (objects and relations).

An individual framework may have the right balance of tradeoffs for one's needs to manage the dynamic complexity of certain object graph update scenarios. But a blind choice of ORM has usually ended in tears on most projects that reach substantial complexity in my experience. Explicit SQL is much more predictable and often just as productive as discovering the voodoo to make your ORM-du-jour dance. The lighter the ORM, the better, in my opinion.

For what it's worth, Golang does have an early ORM of sorts, it's called Gorp. https://github.com/coopernurse/gorp


    > Your hand written CRUD operations gain you nothing but 
    > extra work.
And maintainability. (shrug)


So, getting fairly seriously tangential, what would you recommend for green field devs? Have you tried to skirt the ORM problem entirely and simply go for object stores / NoSQL?


Learning direct SQL would be preferable to doing ORM or NoSQL for a greenfield dev, if the problems they're solving involve managing "transactional" data (like sales / orders).

It's a great fallback skill to have and if you're ever going to use an ORM in anger you'll need to know SQL well anyway.

If they're mostly managing technical data (like clickstreams or logstreams), then, use whatever store makes sense.

SQL will be around for decades to come and at least gives one hope they'll learn something about the relational model which is our only logically grounded approach to managing data integrity.

NoSQL is appropriate when dealing with problems that:

(a) have a different natural requirement than general-purpose logical data management, e.g. your problem maps neatly onto a log store, document store, search index, flat file, or K/V store and there's no gain in decomposing these structures into relations;

(b) no one is going to want to query or update arbitrarily (famous last words);

(c) require massive scale and continuous availability and thus don't fit with most of today's SQL databases... though you'd still probably be better off looking around GitHub for the various frameworks that help you manage a sharded/replicated MySQL or PostgreSQL setup before jumping into NoSQL.


Well, Martin Fowler has gone on the record saying that Active Record (the design pattern, of which Ruby's ActiveRecord is one implementation) is not well-suited to sophisticated data models, and he tends to prefer the Data Mapper pattern, which is similar to what's being described for Go.


That's not what the OP said, he said the lesson was "that ORMs are fundamentally broken abstractions". Fowler does not believe this, and even if he did, his opinion is certainly not a global lesson as if it were now best practice to consider ORMs fundamentally broken.


Except it is quite hard to write a generic data mapper in Go as well. Something like Hibernate or Doctrine ORM would be nearly impossible to write in Go; I'm not talking about the basic features but the advanced features of both frameworks.


That is true, but genericity is actually not an inherent attribute of Data Mapper. The key idea is that there are fundamental incompatibilities between two data models (e.g. a structure in your program and a relation in your database), and the translation between the two models is a reified component of the program rather than just being absorbed into the behavior of the object itself as in Active Record.

There are some efforts along these lines, such as gorp (https://github.com/coopernurse/gorp), but I don't think they're all that widely used.


That's not the issue here; whether a pattern is good or not is irrelevant. The question is: is it possible to write something like ActiveRecord, keeping its most advanced features, in Go? Right now the lack of genericity in the language makes it a no-go.


It's definitely possible but it seems like it's too early for any ORM "best practices". Check out gorp (https://github.com/coopernurse/gorp) and beego orm (https://github.com/astaxie/beego/tree/master/orm) for inspiration.

In our case, we had to write a lot of ORM boilerplate.


There are a number of ORM implementations in Go: http://jmoiron.net/blog/golang-orms/

As you said, Go doesn't lend itself to this like some other languages, but it can work.


Looks very nice. I do appreciate having more clean, well thought out web frameworks in the Go space. Type switches for your handlers is a good way to approach the net/http compatibility.

There are some people that find that Martini is a bit too magical for them, and that is completely okay. It's great to see another minimal framework that will suit their needs.


About this part of the README:

> I have very little interest in boosting Goji's router's benchmark scores. There is an obvious solution here--radix trees--and maybe if I get bored I'll implement one for Goji, but I think the API guarantees and conceptual simplicity Goji provides are more important (all routes are attempted, one after another, until a matching route is found). Even if I choose to optimize Goji's router, Goji's routing semantics will not change.

Maybe you can just use HttpRouter without reimplementing it yourself?

https://github.com/julienschmidt/httprouter

> The router is optimized for best performance and a small memory footprint. It scales well even with very long paths and a large number of routes. A compressing dynamic trie (radix tree) structure is used for efficient matching.

    goji.Get("/hello/:name", hello)

    router := httprouter.New()
    router.GET("/hello/:name", Hello)


Huh. I'm not sure I saw that particular project during my search. It looks neat!

Without having looked in detail at httprouter, I think the most obvious reason it might not be sufficient is that it doesn't support regular expressions. This might not be a dealbreaker, but I'm fond of the occasional regex route, and I'd have to think long and hard about whether it's worth giving up for a faster router. And plus, I'm still not sure router speed actually matters for most applications.

In any event, I do have a long plane trip coming up, and I'm sure Goji will grow itself something at least slightly more efficient than a linear scan then. I think Goji's router and middleware stack are already zero-allocation, so it'll just be finding a way to binary search through routes.
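
For anyone curious what "all routes are attempted, one after another" plus regex routes looks like in miniature, here's a toy sketch (emphatically not Goji's actual router, and without any of the zero-allocation care mentioned above):

    package main

    import (
        "net/http"
        "regexp"
    )

    // route is either a literal path or a regexp, tried in registration order.
    type route struct {
        method  string
        literal string         // used when re is nil
        re      *regexp.Regexp // optional regexp pattern
        handler http.Handler
    }

    type Router struct{ routes []route }

    func (rt *Router) Handle(method, pattern string, h http.Handler) {
        rt.routes = append(rt.routes, route{method: method, literal: pattern, handler: h})
    }

    func (rt *Router) HandleRegexp(method string, re *regexp.Regexp, h http.Handler) {
        rt.routes = append(rt.routes, route{method: method, re: re, handler: h})
    }

    // ServeHTTP does a linear scan: the first matching route wins.
    func (rt *Router) ServeHTTP(w http.ResponseWriter, r *http.Request) {
        for _, rte := range rt.routes {
            if rte.method != r.Method {
                continue
            }
            if (rte.re != nil && rte.re.MatchString(r.URL.Path)) ||
                (rte.re == nil && rte.literal == r.URL.Path) {
                rte.handler.ServeHTTP(w, r)
                return
            }
        }
        http.NotFound(w, r)
    }

    func main() {
        rt := &Router{}
        rt.Handle("GET", "/hello", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte("hello\n"))
        }))
        rt.HandleRegexp("GET", regexp.MustCompile("^/hello/[a-z]+$"), http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte("hello, someone\n"))
        }))
        http.ListenAndServe(":8080", rt)
    }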



If you think you're ready, it might be fun to submit a techempower benchmark.

http://www.techempower.com/benchmarks/


I see why some would call Martini "magic" but I'm not entirely sure I prefer having to deal with a giant `map[string]interface{}`. What you are really doing is moving the "magic" from the framework and onto the developer (I now have to do type checking and casting).

That said I'm a huge fan of Martini and I actually use codegangsta's Inject in my other projects to manage shared state/resources, so I am heavily partial to it.


Yeah, this is a good question, and the unfortunate answer is that it was an engineering tradeoff. I wrote a pretty long reply to cypriss (https://news.ycombinator.com/item?id=7632956) which I think covers this.


Great job! It definitely feels like a microframework compared to others. I'm glad that there are so many starting points for Go web services available now at varying levels of complexity. To me this feels like it fills the void between Gorilla and Revel/Martini/Beego. Also, the code is very well documented and easy to follow.


Every time I see a web framework for Go, I just want to see an example or two of a website developed with it. Does anybody have any solid examples?

Hopefully, like me, some others enjoy exploring existing code as well as reading the examples / docs.


I can't speak for Goji. But one popular open source Martini app is Gogs https://github.com/gogits/gogs


Call it nitpicking, but I would rather not export a symbol 'C' from a library I write. Seriously, is 'Context' that hard to type? And the author seems to be far from lazy, the codebase is nicely and extensively commented (it is a nice read indeed). Apart from this issue, the library seems quite nice.


Do we yet have the Django for Go? Goji is as said a microframework, what I want is an equivalent of Django.


I doubt this will be produced.

Microframeworks written in Go are more likely to replace Django Rest Framework [1] than Django itself.

[1] http://www.django-rest-framework.org/


A bit off-topic, but I love the site colors. I'd love to have that theme for Atom, if it's available.


Orange (keywords): #fd971f

Red (strings): #f24840

Green (variables): #96c22e

White (text): #ffffff

Grey (background): #222222


This was not the feedback I was expecting, but thank you :)

It was based on my terminal color scheme (named "Solarized Darcula"—not sure where I found it) and the way vim happens to color my Go code.


When I read things like "func Get(pattern interface{}, handler interface{})" I start questioning why this whole thing is ever written in Go at all? This kind of ditches half of Go's benefits by moving all type checks to run time.


If Go supported method overloading, you could actually write the types of those functions out. The first one is either a string or a regexp.Regexp, and the second one is one of four variations on an http.Handler, giving a total of 8 varieties of each function.

I decided that the sin of exposing an interface{} as a parameter was less egregious than the sin of multiplying Goji's API surface area by a factor of 8, but you'll be happy to know that passing a value of the wrong type causes the invocation of Get (Post, etc.) to fatally exit immediately. If you're defining all your routes in a single goroutine before calling goji.Serve() (which is probably the most common way to define routes), your application will crash before it even binds to the socket.

So, not quite as good as a guarantee enforced at compile time, but it'll have to do.
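
For the curious, the technique looks roughly like this (a sketch of the shape, not Goji's actual code):

    package main

    import (
        "log"
        "net/http"
        "regexp"
    )

    // get validates its interface{} arguments with type switches and fails
    // loudly at route-definition time if it is passed something unsupported.
    func get(pattern interface{}, handler interface{}) {
        switch pattern.(type) {
        case string, *regexp.Regexp:
            // ok
        default:
            log.Fatalf("get: unsupported pattern type %T", pattern)
        }

        var h http.Handler
        switch v := handler.(type) {
        case http.Handler:
            h = v
        case func(http.ResponseWriter, *http.Request):
            h = http.HandlerFunc(v)
        default:
            log.Fatalf("get: unsupported handler type %T", handler)
        }
        _ = h // a real router would store (pattern, h) here
    }

    func main() {
        get("/hello", func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte("hi\n"))
        })
        get(42, nil) // crashes immediately: unsupported pattern type int
    }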


Don't get me wrong, I perfectly understand what interface{} is and why someone is tempted to use it. It just kills the type checking and turns your Go code into a compiled Python, without all those nice static analysis things. Yes, the API would be larger, but it would be compile-time type-checked and you wouldn't depend on things like "well, it will most probably crash early enough".


I totally agree with the parent: far better to have 8 methods doing the same thing with different names and statically-typed parameters, than 1 method taking interface{}s. Strong +1 to a change in API.


Just a quick look at the examples, and I can say that it reminds me very much of Clojure's Ring. It provides a really small and extensible core, and if the community picks it up and starts writing useful middleware, it could become very useful.


Were you inspired by this? http://www.cherrypy.org/ The layout, colors, and code simplicity look similar.


How does this compare with Martini?

http://martini.codegangsta.io/


I ain't switching from Perl 5.8 to golang until a shared hosting provider becomes available.


Google App Engine


Sorry for my ignorance, how is this different, better than Martini? What is the main goal of creating a new framework (instead of getting the features you are missing implemented in the currently existing ones)?


To be perfectly honest, I'm not sure it is better (I was hoping you would help me decide that!), and I wrote it mostly because every aspiring programmer writes a web framework at some point, and it was time I wrote mine (it was a lot of fun :) ).

But I think there's a good chance it is better.

First, I think one important difference is that Goji isn't full of magical reflection. If Go had support for method overloading, its entire interface would be type-safe. In contrast, Martini does a lot of magical object injection, and it's not clear until runtime if your routes will even work, or what they'll even do, or where exactly the memory for them is coming from.

Second, I much prefer Goji's way of defining middleware. To me, middleware is like an onion (just like ogres!): each layer is a wrapper around the previous one. The way you write middleware in net/http is by wrapping the old http.Handler with a new one, and that's how I wanted Goji's middleware to work too. There's no magic "context.Next()", there's no magic dependency injection overrides, it's just http.Handlers all the way down.
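
In code, the onion is just this (nothing Goji-specific about it; this is plain net/http):

    package main

    import (
        "log"
        "net/http"
        "time"
    )

    // logger wraps a handler and logs each request after it completes.
    func logger(next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            start := time.Now()
            next.ServeHTTP(w, r)
            log.Printf("%s %s (%s)", r.Method, r.URL.Path, time.Since(start))
        })
    }

    // recoverer wraps a handler and turns panics into 500s.
    func recoverer(next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            defer func() {
                if err := recover(); err != nil {
                    http.Error(w, "internal server error", http.StatusInternalServerError)
                }
            }()
            next.ServeHTTP(w, r)
        })
    }

    func main() {
        app := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte("hello\n"))
        })
        // The outermost layer runs first: logger(recoverer(app)).
        http.ListenAndServe(":8080", logger(recoverer(app)))
    }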

Anyways, I'd like to know if you think I'm right: again, I'm really not sure this is actually better than Martini (or $YOUR_FAVORITE_FRAMEWORK), but I think it comes from a slightly different set of principles, and ones that I think are worth considering.


The fact that you're trying to stick with strong typing as much as possible is very appealing to me, and now I'm convinced to try to port my new project from Martini to Goji.

I think this is a good point to differentiate yourself on, and perhaps it should be included in your elevator pitch.


The only reason I don't use Martini is because it uses reflection and there is no way around it. I like your approach.


This is a great explanation. Thank you!


Thank you for the detailed explanation. I think using these slim web frameworks makes it easy to refactor your code or swap out the framework. I agree with some of the points you raised, but at this stage Martini has some crucial features like model validations, sessions, etc. I am pretty sure Goji will get those as time passes. Great work!


> Sorry for my ignorance, how is this different, better than Martini? What is the main goal of creating a new framework (instead of getting the features you are missing implemented in the currently existing ones)?

I truly do not understand this reasoning. Why build anything new at all if you can "improve" on something existing?

(the answers range from "because the existing doesn't facilitate the changes I would like to make" through to "learning experience", and everything in-between).

It is (thankfully) the opposite of this thinking that has given us the myriad of popular and useful web frameworks in other languages (Flask, web.py, Bottle, Sinatra, et. al) that all aim to solve different problems and/or offer differing levels of complexity/kitchen sink.


The author discusses their reasoning and even mentions Martini specifically on the GitHub project page.


FWIW, I prefer Revel to Martini.


One of the huge benefits of contributing to existing projects is to fill in the missing holes. Off the bat I think goji is likely to be yet another web framework that someone picks up and builds insecure services with. Perhaps the time spent improving Martini would have been better than this?


From the Github page:

Third, Goji is not magic. One of my favorite existing frameworks is Martini, but I rejected it in favor of building Goji because I thought it was too magical. Goji's web package does not use reflection at all, which is not in itself a sign of API quality, but to me at least seems to suggest it.


My comment was a little offhand and out of keeping with the standards here; I apologize for that. Let me try to explain myself better:

There is a form of survival of the fittest taking place: new languages need frameworks for people to operate within. Those frameworks spring up; some work, many don't. You consolidate some into one, and then you build that out. It has won. Within it you have aspects that you want to change, so you contribute an alternative, and to make those succeed you have to build clean interfaces. Once you reach this point you have a real framework: i.e., something opinionated about the relationship between varied modules.

Rails is, imho, the best example of this - it has had millions of hours of code contributed to it by the OSS world. It has reached maturity to the point where people wish to make it behave differently. They can do this because of that maturity.

When I see things like Goji, my frustrations are not at goji (again, I poorly expressed that in my initial statement - sorry) but rather at the fact that the initial ramp up period as described above is so inefficient. If you consolidate into one sooner then you're in a really good spot. From my POV golang already has a great micro-framework for web services - so why not invest the time in making that better? For example the graceful http server termination aspect (where goji shuts the listen socket down but services the remaining connections), why not contribute that? Why contribute routing?

RE: security. Recently I came across a piece that delved into the depths of why websites are insecure. The author made the compelling case that a great number of reasons for this is that most engineers do not understand web security, and in some ways that is down to the spread in frameworks - some handle security as a first class concern, others do not. The ones that do not are "easier to use", and so proliferate.


What, specifically, about Goji do you see as enabling insecure services in a way that other frameworks protect against?



