Microservices for the Grumpy Neckbeard (chrisstucchio.com)
144 points by spindritf on Aug 26, 2014 | 102 comments



As Chris says, in practice microservices are not really much different from service objects or Akka actors: they are just a form of enforced encapsulation, which just happens to bring the advantages and disadvantages of always going over a network.

The disadvantages of always paying network costs are extremely steep, so the minute you care about latency, either because there's a user on one side or because every second you wait is more money thrown AWS's way, making everything a microservice quickly becomes a bad tradeoff for most applications. All the real advantages of microservices only start to pay off at scales that only a handful of companies ever reach.

I am thinking of a small startup whose tech lead was a big proponent of microservices, and which boldly went all in with this architecture before any of the real advantages of microservices ever came into play. When they went to production for the first time, they saw that everything was slow, and quickly started to colocate said microservices on the same machines and make them talk to each other directly, without serialization. In other words, they reimplemented EJB local interfaces. At some point in this process, the microservices champion left.

So now they have hundreds of little faux services, and instead of a monolith, they have to worry about what happens when someone wants to change the contract of one of those services. A ball of yarn, by a different name. Instead of wondering whether the code will compile if they make a change, they wonder whether it will run, in practice.

It sounds to me like they are not in any better a shape than if they had built a monolith anyway, other than the fact that their employees can now claim they have microservice experience.

If we go back to basics, maintainability comes down to getting lucky with your selection of abstractions, and trying to minimize coupling. If your abstractions are wrong, your modularization will be either incorrect or just lacking overall, and no architectural pattern will save you.


I agree with the bulk of your comment, and want to add one thing you didn't mention. Microservices allow you to scale the size of your engineering team. We started with a monolith at my company, and as the engineering team grew from 5 to ~50 we started pulling out independent pieces and making them services that a team of ~5 could own, and it has worked out well.


You can do the exact same thing with service objects. You can probably also get away with smaller teams, since most teams don't need to worry about devops.


But doesn't that just put you right back into RPC-land, where you don't even know whether a method call is local or over the network, because the interface is the same?

And if you take the suggested alternative of wrapping the return values in 'Future', then your interface just changed, so now the code that calls your service all needs to be rewritten.

It's pretty well established that service calls are either local or remote, and the service interface should show the difference between the two, but that means that changing a service from local to remote is always going to break its clients.

That's a pain point, but it's probably better than making every service call remote by default which is what a microservices architecture does.
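
To make that concrete, here's a minimal Scala sketch (hypothetical names) of the break being described: the only difference between the two interfaces is the Future, and that one type change forces every caller to be rewritten.

    import scala.concurrent.Future

    case class SendResult(accepted: Boolean)

    // Local flavor: the result is a plain value, available synchronously.
    trait LocalEmailService {
      def send(to: String, body: String): SendResult
    }

    // Remote flavor: the Future in the return type surfaces the network
    // (latency, timeouts, partitions), so all calling code must change.
    trait RemoteEmailService {
      def send(to: String, body: String): Future[SendResult]
    }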


You can divide a project into modules without using microservices...

But microservices can scale in performance terms, and are the future (or something like it) IMHO, including local many-core.

They also enable an app-like, per-usage business model for libraries, which could be interesting...


Why do you have to fully decouple if it's not necessary? Create a gem (or several), and call it a day.


Yeah, I have asked myself this, and have repeatedly tried it. Somehow it usually doesn't work out as well. With gems it's still a bit too easy to dig into a gem and fiddle with it to get something done quickly when in a hurry, breaking the encapsulation. Maybe there are other reasons as well.

Also, with a big org I like having the ability to use different languages and technologies in some small services. Is functional programming all the rage? Well, we can experiment with it on this little service we need to build, without much risk.

Another thing I like is that other teams can leverage my micro service regardless of the languages we are using. With libraries, everyone has to use the same language.


Service architecture should reflect good code architecture. One example of this is hierarchical design: if service A calls service B, B should not call A. There's some extraordinary use cases for violating this within a component, but I haven't seen a good use case for violating this at a library or service level.


Maybe I'm wrong, but I think you're missing it.

Microservices, at least as I understand them, are just functional programming at the OS level. Where they are located, how they talk to each other, and so forth? That all becomes SysOps stuff. That's good and bad. It's really bad if you don't stay on top of it and don't refactor. You can make the same mess you made in your big monolithic POS, just spread out everywhere.

But to state it that way misses the point. The point is to actively keep the number of microservices small, have tests for each one for compilation, deployment, and production. You should never deploy something that doesn't work in the ecosystem. The development environment should prevent it. You should also have a very good grasp on data flow between microservices and latency issues. Monitoring that should be part of your daily work.

You don't just do microservices because they're cool. You do them because they separate concerns in such a way as to be both scalable and configurable without having to touch the IDE. With good safety protocols in place, this decreases both risk and complexity. Without good protocols in place, you'll make a mess no matter what tool you're using.

Look at it this way (put your FP hat on): any program takes input, runs it through some functions, and produces output. If you were debugging, you'd set up breakpoints along the inputs and outputs to chase down errors. Then you'd walk the data flow to see what was going on.

Microservices allow you to do this without touching the code. You would think that this increases network and system instability. It can, but when done well, you get the same thing you had with the big monolithic thing -- only in smaller pieces that you can change and reason about without introducing one of the thousand stupid coding errors that get introduced any time we open the project up and touch it.

Plus you get hot-swapping, easy testing, auto-scaling, and ease of programming. In the modern VM/hosted world, this is a pretty good trade-off.


At my current company, we've been breaking microservices out from the megalith whenever we need scaling. While it makes it a pain to install, that pain made it possible to meet some very steep performance goals. Going from importing a few hundred rows a minute to tens of thousands, etc.


How would the Service Objects approach work with Z-axis scaling?


(now that I understand Z-axis scaling means sharding: whoever thought up that term was on something)...

There's no real problem with sharding using service objects: you parameterize them to determine what shard they are serving.

There are issues with replacing one if it goes down, or adding in a new one, or hot shards vs unused ones, but these are all familiar with any kind of sharding.
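
A minimal Scala sketch of that parameterization (all names hypothetical): the shard assignment is just a constructor argument, and routing to the right instance mirrors routing to the right server. Replacing a dead instance or rebalancing hot shards then amounts to constructing new instances with different parameters.

    case class User(id: Long, name: String)

    // Each instance serves exactly one shard, fixed at construction time.
    class UserService(shardId: Int, totalShards: Int) {
      private val store = scala.collection.mutable.Map.empty[Long, User]

      def owns(userId: Long): Boolean = userId % totalShards == shardId

      def save(user: User): Unit = {
        require(owns(user.id), s"user ${user.id} belongs to another shard")
        store(user.id) = user
      }

      def lookup(userId: Long): Option[User] = store.get(userId)
    }

    // The router dispatches each request to the instance owning that shard.
    class ShardRouter(shards: Vector[UserService]) {
      def lookup(userId: Long): Option[User] =
        shards((userId % shards.size).toInt).lookup(userId)
    }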


What do you mean by z-axis scaling?


I'm guessing it's a joke. Y=vertical scaling, X=horizontal scaling, Z=? scaling


X-axis scaling: running multiple identical copies of an application behind a load balancer

Y-axis scaling: splitting the application with functional decomposition (a server for account info, another for catalog, etc.)

Z-axis scaling: each server runs an identical copy of the application (like X-axis) but each server is responsible for only a subset of the data - some other component of the system is responsible for routing each request to the appropriate server (Ex: sharding)


Why not just call it sharding? There's enough jargon as it is!


Because when you treat those three aspects of scaling as dimensions of a cube, you can visualize what it means to scale on different axes.

See this diagram http://akfpartners.com/techblog/wp-content/uploads/2008/05/a... taken from http://akfpartners.com/techblog/2008/05/08/splitting-applica.... For in-depth discussion, see the book The Art of Scalability.

http://theartofscalability.com/


You get to sound smart when you compel people to ask what jargon-word X means.


Isn't this just Object Oriented API Design 101?

I kept reading, looking for the punch line, but as far as I can tell he's just describing really basic information hiding and interface design.

Maybe it's OOD For Server Programmers? There's nothing wrong with it, it's just surprising to me that it's news.


Do you remember when you were a junior programmer? Did you ever have a crotchety senior programmer, after being shown a webapp's source code, say "this is terrible. It's fine for web pages, but this coding style is shit"?

Slowly and surely, we relearned all of the lessons the mainframe designers had learned, and it became heavyweight.

The new generation is working on smaller problems and saying, "This is heavyweight!" And then they go with a minimalist solution, and the problem space grows. They then learn the lessons the crotchety senior programmers had told them.

Microservices is but one more example. Decrease coupling, increase cohesion. And then they'll get into hairier and hairier problems and rediscover the problems the last time someone tried it. Remember when service oriented architecture was the buzzword? :)

To be fair, each generation iterates on the past. I'd rather code today than the 80s. However, each generation, each programmer has to learn the lessons again.

Usually, we learn the hard way.


When I was a "junior programmer" at 14 or so, I thought BASIC was too slow so I learned assembly language and wrote a game in it. And then several more, some professionally.

When I first encountered OOD, I lapped it up and became an acolyte. Then I discovered how OOD didn't actually solve ALL problems well. Now I consider myself post-OOD; I use the parts that are useful when they're useful, and otherwise I use other paradigms. I became a fan of and later gave up on Ruby as a language before Rails existed.

I don't think I've ever been the junior programmer you just described.


The only punch line is "hey microservice hipsters, those uncool Enterprise Java guys already solved this problem."


Hasn't that been the punch line of the last 10 years of tech?

"Oh hey data hipsters, no I'm sorry our SQL data base is 200TB, no we don't have performance problems."

"Oh no we don't need your functional restless framework, we've been using a java framework for like 10 years, it peaks out at 1000 req/s."

"Why would we buy 4 webservers? It has 4 nics, and 8 CPU sockets. Sure it costs 500k each, but then we can keep using our java framework."


Yep, in the last few years I keep thinking, "Maybe it's time to get over my distaste for Java and just use it". They have some kind of type system, lots of pretty good tools (aside from Eclipse) and documentation for even the crappiest Java library is light years ahead of what you get with most JavaScript projects (even API docs for Java libs are easier to navigate and read!)


I find it useful to read articles by someone who will translate from hipster to crotchety-unix-grey-beard for me.


c2wiki does that for you ^-^


I guess. Though those Enterprise Java guys were using concepts developed in the '50s and '60s, well before Java existed. [1]

[1] https://en.wikipedia.org/wiki/Object-oriented_programming#Hi...


Every Java program becomes a bug-ridden, slow version of half of Common Lisp (implemented in XML).


Completely forgetting that application architecture and design is constrained by time, and that those "uncool Enterprise Java guys" actually have much more time to work on "heavy" architectures compared to web services.


How do Service Objects require more time than microservices? If anything I've found they take less time - logically the code is similar to microservices and you need to spend less time on devops.


You're forced to explicitly define the interface. While I think the advantages of an explicit interface are well worth it, I can write a Javascript microservice with no defined interface faster than I can write an interface declaration + implementation in Scala.
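
For illustration, a tiny hypothetical example of the ceremony in question; a JavaScript microservice would just export the function with no declared contract at all.

    // The explicit interface declaration...
    trait GreetingService {
      def greet(name: String): String
    }

    // ...plus the separate implementation. More keystrokes up front,
    // but the contract is checked by the compiler.
    class GreetingServiceImpl extends GreetingService {
      def greet(name: String): String = s"Hello, $name"
    }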


We're talking Java beans. I think I used them first (and last) 15 years ago, and they were ugly to work with.


Technically it's EJB (Enterprise JavaBeans); JavaBeans is a pattern where you need to adhere to particular coding conventions (getters, setters, naming of said getters and setters).

I also used to work with these a long time ago (EJB 1.1 and 2.0). A lot has changed since then though, and EJB 3 has solved a lot of the pain points (arguably pushed by the success of concepts used in Spring and Hibernate, whose authors served in the JCP for the newer JEE specs).


In the case of EJB and so on, it's not just OO concepts. You also have declarative distributed transactions and nesting of these transactions, role based security with a shared context. I thought the concepts were originally based on CORBA, which is even older.


My thoughts too--I couldn't understand why so many words had to be written to describe an interface and a concrete implementation.


I had this, too, but then when I got to it, I thought the punchline was when the service object was revealed as potentially just a wrapper for a call to a microservice under its hood, and the ensuing warning about 'network partition' (this is modernese for 'network breakage', right? Right.)


I wonder whether all the people talking about microservices have come across the unikernel approach (cf. Mirage OS [1, 2]). It seems like these two ideas, at some level, fit quite well together.

[1] http://queue.acm.org/detail.cfm?id=2566628

[2] http://openmirage.org


I was thinking the same way - taken to the extreme, they both seem to converge.


The scariest thing mentioned here is when Team Email switches from a Service Object to a Micro Service. Actually, it's not that scary in the case of email, where the user of that object always anticipates potential failure (and probably has next to no recourse if failure occurs).

So instead we could pretend that it's something like MachineLearningProvider. When this converts to a Micro Service its failure space increases by an order of magnitude without expressing it in the interface.

That was the primary problem with RPC: you really don't want to confuse local function calls with remote ones.


This is generally solved with monadic design. Results are returned wrapped in some monad - Future[Result] in this case. The monad generally expresses the failure modes.

When you switch from, e.g. `LocalMachineLearningProvider` to `RemoteMachineLearningProvider`, your interface will change from MachineLearningResult to Future[MachineLearningResult].
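
A sketch of what that looks like (the provider and the fallback policy are invented for illustration): the Future pushes the remote failure modes into the type, and the caller is forced to decide what a failure means.

    import scala.concurrent.Future
    import scala.concurrent.ExecutionContext.Implicits.global

    case class MachineLearningResult(score: Double)

    trait RemoteMachineLearningProvider {
      // The monad carries the remote failure modes (timeouts, network errors).
      def classify(input: String): Future[MachineLearningResult]
    }

    // The type forces callers to handle failure; here we fall back to 0.0.
    def scoreOrZero(p: RemoteMachineLearningProvider, in: String): Future[Double] =
      p.classify(in)
        .map(_.score)
        .recover { case _: java.io.IOException => 0.0 }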


Certainly, or even with two layers of futures. Notably, this changes the interface.

I'm not saying it's not solvable, just that it was mentioned so quietly in the article. You should have to work to re-prove your code after RPCing something.

It can just be scary if you've spent time making it out as though MachineLearningProvider can never fail. Interacting with an outside object can be a painful experience.

This gives you some of the worries of a true distributed system within your monolith.


If only there were a way of transforming F[F[X]] to F[X]. Then we could avoid nested futures!


You're being sarcastic, but I meant to suggest that there would be value in running those layers independently so that you don't combine errors. If you present it as F[X] then you've thrown away potentially interesting information. IO (IO ()) is pretty valuable, for instance.


Just curious, what is the advantage? Near as I can tell EmailProvider will return one of two sorts of errors - either a ServiceError (the microservice was unable to send) or a NetworkError (unable to talk to micro service). These will be represented in the class of the exception the future wraps.

What benefit does two levels of futures provide? I was sarcastic before, but now I'm honestly asking.


The two differences I can think of are

    1. If you don't assume exception throwing (which
       may be valuable) then you can more easily distinguish
       one failure from the other.
    2. It reflects in the type a more accurate representation
       of the effect going on. If such distinctions are
       undesirable it's a trivial `join` to eliminate them
       but if the distinction is never exposed then you can't
       get it back.
So really, my thought is that given greater opportunity to distinguish between end-user meaningful effects it's worth exposing them by default. It should be a deliberate choice to hide them with join.
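
In Scala terms, a sketch of the two layers and the deliberate join (here `Future.flatten`, with invented names):

    import scala.concurrent.Future
    import scala.concurrent.ExecutionContext.Implicits.global

    // Two-layer result: the outer Future fails if the service couldn't be
    // reached, the inner one if the service itself reported a failure.
    def sendEmail(to: String): Future[Future[Boolean]] =
      Future {         // outer layer: the network hop
        Future(true)   // inner layer: the service's own success/failure
      }

    // `flatten` is the join: it deliberately collapses the two failure
    // layers into one, after which the distinction cannot be recovered.
    val collapsed: Future[Boolean] = sendEmail("a@example.com").flatten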


Tel, I have no idea why, but your reply is dead. It is a good answer, however, and provides me with food for thought.


Strange. I see it appearing twice. I will try deleting one.


Especially if it had some nice syntax sugar. Then we could chain together as many as we want!


And then instead of mostly working (when the network isn't down), your code fails to compile and doesn't work at all. And you have to either spend just as much time reworking things to handle the errors in a user-acceptable manner, or take shortcuts and stub out the error handling to either die or fake things and keep going (ie, to mostly work as long as the network isn't down).


> When this converts to a Micro Service its failure space increases by an order of magnitude without expressing it in the interface.

I think you just emphasized a bit that, in principle, should never be allowed to happen. When Team Email switches to a service their interface has changed, period. In some idealized sense the contract they implement is the same, sure, but an interface that allows for the possibility of timeouts, down servers, lost packets, etc. is fundamentally different from one that doesn't, and you can't (or at least shouldn't) use the same code to interact with the two. Ideally, the compiler wouldn't even let you try - and in a good type-safe language, this is accomplished if Team Email swaps out the interface they're implementing when they go service.


Oh totally agreed! I was bothered that the essay emphasized doing this without "changing the interface"! At least not the types.


So you are suggesting it's better to prematurely "optimize" every call to be remote from the beginning? Wouldn't anyone wrap remote services behind a facade anyway, instead of directly calling an HTTP client? I fail to see how this leads to an interface that expresses the remoteness more in most cases.


I'm suggesting that you can't always abstract away things the way you'd like. The idea of how to send an email is highly abstractable; the idea of some other entity doing it leaks, and rightly should.


Although generally they are, microservices don't have to be accessed over the network: they could use shared memory or Unix domain sockets, for example. As others have mentioned, JSON-over-HTTP is only one of many options.


I think one of the differences between service objects and services (never mind the micro-service distinction) is the language-agnostic nature of services (and the idea that services can be written in the language that best solves their problem).

If you are assuming a fixed language (or at least fixed interoperability), the post rings true. If not, then service objects are clearly not appropriate.


If you were starting a project, do you think it would ever be a good idea to write parts of it in different languages? Surely if you think a language is good, then it's good enough for most things, and the advantages of having a common language (review, experience, transferring devs, cognitive overhead, etc.) outweigh the benefits of using the perfect language for every task (even if you could get people to agree on what each language is best for; it will be hard enough to get them to agree on one).


So I have several responses to that.

- Lots and lots of large systems are decomposed across different languages based on the advantages they have. Some parts of the system may be extremely latency sensitive and therefore can't afford GC-based languages; other parts may need to interact with existing enterprise systems and therefore are best written in Java or on the .NET stack. A very common split is between quantitative developers who prefer languages like Python and other developers who prefer something else. So basically yes, there are times that I've worked on systems that were designed from the ground up to use different languages (or should have been). That said, of course you want to make sure you aren't adding complexity for its own sake, and a unified language is usually better.

- A very common use of service architectures is not green field development. It is in migrating from legacy systems or augmenting them. Lots of large web based systems were originally written as monoliths on the LAMP stack or as Perl/CGI. As they grow it becomes obvious that scaling will require different technology choices, but it is almost always a mistake to do a full rewrite. In those cases, language agnosticism can be a real boon as you can ramp up in languages that are more popular at the time of need (or in languages that allow you to attract good developers).


This is entirely dependent on what type of project it is. At most companies, tastes change over time so you end up with a hybrid of application layers built on different languages and technologies. I guess in an ideal world the stack would be as non-diverse as possible, but it never seems to end up that way.


I have come to the conclusion, over many years, that microservices are simply a runtime solution to a compile-time problem. Why can we not develop code in a modular way (to separate concerns & isolate), but deploy it in a monolithic way (to reduce latency/reliability concerns)? This is what we did when OO was fashionable.

I understand that we shouldn't treat RPCs like local calls, but that doesn't mean we cannot do the reverse. If we design services properly we don't need to tie the design and the deployment.

I just can't see the purpose of making something permanently flaky at runtime for the sole purpose of keeping developers on track at design time.


To clarify microservices, please have a read of Martin Fowler's article: http://martinfowler.com/articles/microservices.html

In particular, I like this quote: "The second approach in common use is messaging over a lightweight message bus. The infrastructure chosen is typically dumb (dumb as in acts as a message router only) - simple implementations such as RabbitMQ or ZeroMQ don't do much more than provide a reliable asynchronous fabric - the smarts still live in the end points that are producing and consuming messages; in the services."


I am still trying to wrap my head around microservices, and maybe someone can help me. Say we are building a blog platform.

1. Would it be separated into the following services: user, post, and comments?

2. If I was designing this in flask or django, would each service be running on localhost with different ports, e.g. user is localhost:8080, post is localhost:8081, etc?

For my personal flask project, I am currently following this style http://mattupstate.com/python/2013/06/26/how-i-structure-my-... with service layers.


The point of a microservice-based architecture is to allow a large team (15+ people) to work together without stepping on each other's toes. The idea is you break the application into smaller services, and each service can be iterated on and deployed by a small group of 1 to 4 people. I cannot see any reason why it would be a good idea to adopt a microservice architecture for personal projects or for a startup with fewer than 10 people.

I did work on an application suite that included a blogging platform, which aimed for a service-based architecture. We had services for:

* taking screenshots of the rendered blog post, for use in preview thumbnails
* user identity and settings (shared service for the entire suite)
* asynchronous task handling
* sending emails
* adding blog post analytics to the dashboard
* all things commenting (including a javascript embed code)
* the main application for editing and rendering the blog content
* customizing blocks of content based on visitor identity


> 1. Would it be separated into the following services: user, post, and comments?

No; posts and comments would likely be provided by the same service. Users, on the other hand, are likely an abstract idea implemented across several services (an authentication service, a profile service, etc.)

An easy way to think about it: if your business logic needs to do the equivalent of an SQL JOIN on data from both A and B to compute some common query (e.g. displaying a page with a post and comments on it), then A and B likely belong to the same service.

If, on the other hand, you have data/state that can be encapsulated into its own black box that none of the rest of the code needs to know the internal structure of (e.g. user authentication information: password hashes, 2FA token seeds, session IDs, etc.) then that data/state can be isolated to its own service.
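
A sketch of that heuristic (hypothetical types, with in-memory data standing in for the database): the page render needs the join, so one service owns both sides of it.

    case class Post(id: Long, title: String)
    case class Comment(postId: Long, text: String)

    // Posts and comments are "joined" on every page render, so one service
    // owns both; authentication state would live behind a separate service.
    class ContentService(posts: Seq[Post], comments: Seq[Comment]) {
      def page(postId: Long): (Option[Post], Seq[Comment]) =
        (posts.find(_.id == postId), comments.filter(_.postId == postId))
    }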


Using separate services for posts and comments is incredibly common, though, considering how many sites use a third-party service like Disqus or Facebook to provide comments. If you're expecting to load comments in JavaScript (which is really common these days to reduce load time and the impact of link spamming), implementing comments as a separate service where groups of comments are keyed by origin URL or article ID is a no-brainer.


True. I was more imagining discussion sites like this one, where a post is basically just a special parentless comment that's rendered differently, and all normalized tables that connect to one also connect to the other (e.g. both posts and comments are associated with a user profile, both posts and comments have a point score, etc.).

On a plain blog, where "people who post" and "people who comment" are basically disjoint sets, comments can indeed be separated out into their own service, which may indeed be a gateway implemented by a third-party RPC consumer object.


I think that eventually we will move towards defining the whole system in a high-level way that is independent of the languages, of whether it is a service object/RPC or a microservice, etc. In order to do that you need to define the programming languages, databases, web servers, and web services using a common meta-language or interchange format based on some type of knowledge representation. To make things practical, there will be the capability to work with the systems in your preferred representation and then translate back into the common metalanguage.


Let's call it the common object request broker architecture or CORBA for short. Someone check if anyone is using that name already.


LOL.

I think the difference between "push a few words onto the stack and jump" and "send a bunch of packets out into the cold, harsh world and pray for a response" is simply too big for a common interface abstraction in many cases.


That's the right context, but I think we need to eventually go to a higher level with some type of common bidirectional knowledge representation (http://en.m.wikipedia.org/wiki/Knowledge_representation_and_...) scheme that can underpin not only interfaces but all information system layers and representations.


CORBA is an architecture astronaut's wet dream; to go any higher would result in negative vacuum.


Is this something written in the node.js language or the Rails language?


Even better, it's a language called beans. None of the kids today know about it so you'll have massive street cred.


I think you're being too subtle here.


This sounds a little like CORBA to me. While I love the idea of a network of software components providing services that you can discover and use without knowing about the implementation, it didn't really take off. HTTP and JSON is a lot closer to providing that goal.


Isn't this the goal of protocol buffers?


No. Protocol buffers are about using a single protocol for all your (micro)services.

Ilaksh is describing something closer to the goal of Akka or Erlang - you define the logic of your program, but it can be run as a distributed system or monolithically without changing your code.


Makes sense, protobufs (or similar) are just one part of this puzzle.


Certainly there are solid points here, but he doesn't really address the ops side of the equation. I wouldn't write a bunch of microservices on day 1 at a startup with no traffic. I have no doubt it is more complex than a monolith for development & operations. I have no doubt that it introduces a bunch of extra network-related overhead.

But what happens when you have serious traffic? One of the main benefits of microservices is splitting the system up for deployment & provisioning. One particular part of the system might be very memory heavy, you can deploy it on a cluster of machines that are spec'ed appropriately. One part of the system might have a relatively constant workload so you can put it on dedicated hardware, while other parts might be cyclical and you can spin up and down instances to match traffic. You can deploy particular components independently which makes releases a much smaller affair.

These things are not impossible with monoliths, but they are easier with microservices. So if the bigger pain point is deployment, scale, etc and not writing new code, then microservices might be a good choice.


In many cases you can do everything you just described by sharding the monolith. Instead of putting 100% of an EmailService on 1 box, put 10% of it on each of 10 boxes, which are also each running 10% of a web service, 10% of an auth service, etc. This will give you logical separation and reduce latency.

Obviously I'm not advocating making a postgres (lots of seeks) server share disks with a hadoop server (lots of spinning). That's silly. I'm advocating sticking with service objects until you have a good reason not to.


Exactly, that seems to be the point of the article. Starting with microservices is plain old over-engineering (reminds me of the original J2EE, which I managed to avoid), but using a service object still leaves you free to distribute if and when you need to.


> "exposed via a json-over-http protocol"

Sigh. The use of the JSON format is neither necessary nor sufficient to make the service "micro".

There's this thing called "content negotiation". IMHO, any HTTP web service framework worth using just does it. This means that the "XML or JSON" issues are fixed before they start. Unless you really want to have that trouble later.
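
For anyone unfamiliar with the mechanism, a hand-rolled toy sketch (real frameworks do this dispatch for you): the same handler serves XML or JSON depending on the request's Accept header, so the "XML or JSON" fork never reaches your application code.

    // Toy content negotiation: dispatch on the Accept header.
    def render(accept: String, fields: Map[String, String]): String =
      if (accept.contains("application/xml"))
        fields.map { case (k, v) => s"<$k>$v</$k>" }
          .mkString("<result>", "", "</result>")
      else // default representation: JSON
        fields.map { case (k, v) => s""""$k": "$v"""" }
          .mkString("{", ", ", "}")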


Agreed.

I am currently working on a microservices-based system that exchanges Google Protocol Buffers over Tibco EMS.

The key ingredients of microservices are simply finely granular distributed components.


He says that it is an ill-defined term, then goes on to say that for the purposes of this article he will use json-over-http.

The actual method (json-over-http, thrift-over-infiniband, best practices regarding content negotiation) is not the point of the article.

So to keep things simple for the purposes of an article he uses probably the most common method; is that really cause for a self-aggrandizing snarky comment?


> is that really cause for a self-aggrandizing snarky comment?

I think it is worth addressing this. In my day job we are dealing with the ongoing effects of not doing http content negotiation a while back, and an article that perpetuates bad practices (even by the omission of simply failing to mention anything else because it's not the point of the article) is not helpful.


Regardless of how the code is written, the 'at scale' system will need to physically resemble the microservice model. Once you have tens, hundreds, or more servers you are forced to group things by service. The alternative would cause massive data consistency delays from the sharding required to reduce network traffic.


It's worth noting that even if all you have is a bunch of SOs floating around in the same "monolith", you still have a "distributed system" of a kind. You (probably) will never have to worry about partitions, but your state space is still going to be the product of all of your independent stateful threads.

Obviously the solution is to ensure, to the greatest degree you can, that most SOs don't actually maintain much internal state. This can be partially achieved by abstracting out certain stateful primitives into other service objects; dependency injection is one way to do it.
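
A minimal sketch of that shape (hypothetical names): the service object itself holds no state; its one stateful primitive arrives through the constructor, so the state stays contained and swappable.

    import java.util.concurrent.atomic.AtomicLong

    // The stateful primitive, pulled out behind its own small interface.
    trait Counter { def next(): Long }

    class InMemoryCounter extends Counter {
      private val n = new AtomicLong(0)
      def next(): Long = n.incrementAndGet()
    }

    // The service object is stateless; its stateful dependency is injected.
    class SignupService(ids: Counter) {
      def register(name: String): Long = ids.next()
    }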


Not to mention that service objects can participate in transactions and rollbacks a lot more easily. Doing that with web services involves getting a ton of wacky protocols like WS-AtomicTransaction working, which many back ends simply don't support, or don't support well.


The fact that something called WS-AtomicTransaction even exists is terrifying. I thank $deity I don't have to deal with that sort of stack.


Microservices as described here sound a lot like Actors to me.


Neckbeard.

I understand the semantic emphasis (and appeal) of 'neckbeard'/'graybeard' in the IT subculture is patently on expertise/oldschoolness, not on gender at all -- hell, I've used that noun with gusto myself with barely any thought about the person's gender.

Yet, given the (by no means conclusive, but increasingly socially accepted) evidence that using implicitly gendered words like that can be, at best, tacitly exclusionary to some degree -- like equating 'balls' with courage/determination, or referring to 'man' as humankind -- perhaps one should, at the very least, pause and reflect on whether the amusingness of 'neckbeard' is, in this particular context, worth its use.


It's an insult, so it's sort-of exclusionary by default. It refers to those nerdy guys who don't know how to dress themselves, compulsively collect arcane knowledge and probably smell bad.

Maybe we ought to come up with a female stereotype to match, or a genderless one. Or maybe this will do just fine - you don't want to be a 'neckbeard' regardless of gender, it's not a good thing.


There is a term for the female equivalent, namely the elusive 'legbeard'.


Let's not deprive women of their sacrosanct right to a fedora.


I think it's a little different when the word (i.e. neckbeard) is pejorative.


Next step: mixed gender beauty pageants.


Seriously? I'll put a content warning on my blog next time: "warning, may offend women's delicate sensitivities."


> Seriously? I'll put a content warning on my blog next time: "warning, may offend women's delicate sensitivities."

Wow, you don't get it at all. You are perpetuating a stereotype that is exclusionary, and your reaction to someone bringing it up politely is to throw it in their face with explicitly sexist sarcasm. Nice job, and you have attached your real name to it all, Chris Stucchio. That alone might be reason to have a different, more professional tone.


I'm not afraid of the internet bullies. I'm already out there so doxxing doesn't scare me.

If you want to make a moral case why I should change the title, do it. If you are persuasive I'll change it. Appeals to inconclusive but "socially accepted" evidence are, however, ridiculous.


I don't particularly care for the title; however, it's the "warning, may offend women's delicate sensitivities" that is incredibly offensive. Maybe you want to go back in time and not write it.


Offensive to who? I'm mocking gone35 for asserting women are delicate and must be protected from the word neckbeard. Offensive to white knights? I have no problem with that.

I stand by the comments I've written here and would not change them if the edit button were still available.


Yes, you're mocking someone who patiently explained an important point to you. If gone35 had talked about anything else in the same courteous tone, would it have been OK to mock him?

It's very simple, just don't mix professional topics and gender specific terms.

It's otherwise a really good blog post and discussion, I'm actually sorry for participating in disrupting it.


You feel it is simple, but gone35 gave no argument for it. Neither did you. Just a weak attempt at bullying ("your name is on it chris stucchio").

Make a good argument and identify your axioms (fairly crucial) and I'll give a more intellectual response. Simply tell me to change my style for "socially accepted" but "questionable" reasons and I feel quite justified in ignoring and mocking you.



