I would add that most companies do not have the product/engineering/UX talent to approach this problem. Stripe is in the unique position that its talent can execute on several internal “projects” that could each individually justify an entire company. Most founders and leaders care deeply about enriching their teams’ camaraderie and collaboration -- even if only from a cynically rational perspective of onboarding, productivity and retention efficiency (though, from talking to many of them, I would argue there is also a benevolent side to it). It is a hard problem to get right, and Home nails several important details.
Disclosure: I am a founder of https://slab.com that is also addressing this as a scalable SaaS solution.
I'd suggest that if you are building a product like this, it really needs to work on-prem. Many companies will not host this kind of critical information outside their own network.
Seems like premature optimization. One can build a huge product based on the companies that don't mind, and adapt for on-prem later depending on feedback.
You can often tell when a product has been changed to do on-prem, as well as the reverse. Those traces of transition can be painful for both the developers and the customers. Like the other person mentioned, there's a lot you probably won't consider in terms of features or compliance if you're not building for both.
There's also ops and support. Building something meant for a dedicated team to keep running is way different from something a team will do ops for as one of many things. A complex architecture (even if contained in a single VM) requires a lot more support from more capable support people on the developer's side as well.
You have to think through a lot there. It's not something to take on without careful deliberation, but if you're sure you'll eventually need to do both, you should strongly consider it. If nothing else, make it part of every design decision and pay attention to the operational overhead borne by your own teams, because those problems might end up being customer problems.
Disclosure: I am the founder of a company that aims to solve the first documentation case you pose. In the process of validating this problem and getting early feedback on our solution, I interviewed dozens of companies to learn about their tools and processes, and hopefully some of that can be helpful here.
The information split is very much as you describe, between canonical and ephemeral. For the first case, the de facto tool for medium-sized companies (25-1000) is Confluence (large companies lean towards a custom intranet). Confluence is accessible to the entire company, not just engineers with git and markdown know-how; it is also well established, very reasonably priced and can be deployed on cloud or on premise. There are drawbacks, such as poor search and complex organization that makes it hard to find anything, but there is no clearly better solution at the moment, though there are a few startups like mine trying to change that.
For the second case, companies have tried to deploy an internal StackOverflow / Quora, some homegrown and some using open source tools, but unless tied to a specific workflow (like all-hands Q&A), they are eventually abandoned. The issue is a lot of duplicated content that also changes very often. So long-term storage is not as valuable as simply removing the initial friction, and what seems to work best is an ephemeral solution like a dedicated Slack/Hipchat/Teams/Mattermost/Gitter/Zulip channel.
The company is called Slab (somewhat of a double reference to a thing you can write on and to slab serif fonts). It is currently in private beta, but if you would like to take an early look, please find my email in my profile and I would be happy to share with HN.
We are much simpler and easier to use, and play well with the non-Atlassian ecosystem (as well as with JIRA). In both these regards, we like to think of ourselves as being to Confluence what Slack is to Hipchat.
I’m the author of Quill, and I would attribute my decision to David Greenspan of Etherpad. Not sure if they were the first, but it certainly predates all the examples I know of.
Quill is now a very popular project on GitHub, in the top 250 of all projects. A lot of people want to add their two cents to a popular project, and unfortunately the volume is far too high: a) I cannot respond to every single attempt at adding said cents, and b) GitHub Issues' linear commenting UI makes all those attempts confusing and cluttered. So with this, I prioritize the question that was asked, and specifically the concerns the OP raises.
I have repeated many times that the removal of getHTML was because it was a passthrough function that added no additional value to Quill. You can look at the code history and the implementation of getHTML to confirm this fact. It has nothing to do with the desire to remove functionality as no functionality was removed.
I'm confused why Delta is a bad name, since the word delta means change, variation or difference, and that seems to describe what it is: a change to Quill's content. Data source, on the other hand, is unspecific and even misleading. Naming is hard though, and other readers can find out what Deltas are here https://quilljs.com/docs/delta/ and decide for themselves.
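For readers deciding for themselves, a Delta is plain JSON describing either a document or a change to one, as a list of operations (the shape below follows the linked docs; the text values are just illustrative):

```javascript
// A Delta describing the document "Hello World" with "World" in bold.
const doc = {
  ops: [
    { insert: "Hello " },
    { insert: "World", attributes: { bold: true } },
  ],
};

// A Delta describing a change: retain (skip over) the first 6
// characters of the existing content, then insert new text there.
const change = {
  ops: [{ retain: 6 }, { insert: "there " }],
};
```

The same three operation types (insert, retain, delete) describe both whole documents and edits, which is what makes the format compact.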
Tables were reported as a feature request by me early in the project's life because it is a known feature of Word, but only years later, in Sept 2016, did users indicate strong interest in it. I believe in community contribution to open source, so I decided to give the community a chance to build it and offered detailed implementation guidance: https://github.com/quilljs/quill/issues/117#issuecomment-244.... The fact that the user had a seemingly urgent need for it and was employed by a large public company with resources also contributed to this decision. Valuable exchanges were had in the issue, and multiple users have fully implemented it for their companies, to varying levels of feature richness depending on their requirements. An official canonical implementation is coming in 2.0: https://medium.com/@jhchen/the-state-of-quill-and-2-0-fb38db....
Author of Quill here. Interested in hearing Marijn's thoughts, but here are some of my main observations. At a high level, ProseMirror is much more willing than Quill to sacrifice simplicity for power. This value difference manifests in the target audience, architecture and API design:
Quill can be used for the get-going-quickly, drop-in use case. ProseMirror specifically warns against this: “If you're looking for a simple drop-in rich text editor component, ProseMirror is probably not what you need. (We do hope that such components will be built on top of it.) The library is optimized for demanding, highly-integrated use cases, at the cost of simplicity.”
ProseMirror’s schema, as documented, is more flexible than Quill’s. ProseMirror appears to allow anything, whereas Quill imposes some constraints. For example, Quill requires every node to be either a leaf, which cannot have children, or a container, which must have at least one child. There cannot be a node that optionally has children, as is allowed in ProseMirror. In my experience, the constraints Quill imposes lead to a more consistent and bug-free experience across browsers. I will be curious to try the edge cases I have encountered against this new ProseMirror 1.0 to see if it handles them the way an end user typist would expect. If Quill can benefit from a shift in the flexibility of its schema, it will do so.
Quill is far more battle tested. Slack, Salesforce, LinkedIn, Intuit and many others use Quill in their main user-facing production products, not just internal employee-only tools. ProseMirror is off to a great start with the NY Times, but there is a large difference in adoption at the moment.
Sure. I think it's fair to say that ProseMirror is a more ambitious project, reaching for features that aren't part of (even the new crop of) existing libraries.
* Firstly, the schema feature. ProseMirror's content expressions [1] are a regular language that can be used to describe a sequence of child nodes. The editor will make sure the content of the node always matches this expression. This allows significantly more interesting things than Quill's array of allowed children—i.e. "heading block* section*" to say that a given node must first contain a heading, then any number of blocks (say, paragraph, list, figure, aside, etc), then any number of subsections. (HN's half assed markup doesn't seem to allow escapes on asterisks, so I used *'s)
* Quill uses imperative data structures and events throughout. I've really become convinced of the power of functional Redux-style architectures for this kind of component—I've found it makes it much easier to write bullet-proof extensions. (This may be a matter of taste.)
* Table support doesn't appear to exist in Quill yet. A table module with features like colspan/rowspan and cell selections has been written as a relatively small plugin [2] for ProseMirror.
* Exposing all the system internals (and designing them in a way that makes them usable), rather than hiding stuff behind a small API, means that people can do really advanced things as extensions. This was a lot of work, and wasn't initially planned, but the kind of users we're aiming for require it.
* As for real-world use, yeah, we've just released 1.0, your project is older. Still, our users (including Atlassian, which has already rolled out ProseMirror in user-facing products) have been doing very advanced stuff already, and my ten years of experience with CodeMirror mean that I more or less know what I'm going to run into.
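The content-expression point above can be made concrete. This is an illustration only (ProseMirror compiles expressions with its own matcher, not a regex), but "heading block* section*" behaves like an ordinary regular expression over a space-separated list of child node type names:

```javascript
// Illustrative analogue of the content expression "heading block* section*":
// exactly one heading, then any number of blocks, then any number of sections.
const contentExpr = /^heading( block)*( section)*$/;

console.log(contentExpr.test("heading block block section")); // → true
console.log(contentExpr.test("block heading"));               // → false (heading must come first)
```

The editor enforces this invariant on every transaction, so an extension can never leave, say, a section without its heading.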
(Also, your website claims, wrt ProseMirror: "Quill’s architecture is more modular, allowing for easier customization of internals. Core modules that handle basic functionality like copy/paste and undo/redo can be swapped out in Quill." I think that's a mischaracterization.)
Also, ProseMirror is definitely seeing wider adoption than the parent implies. Even in this thread alone, Atlassian and startups are popping up attesting to its good design.
Your points are fundamentally correct, and I would argue that your parent is attempting to limit the potential of ProseMirror, in order to preserve the need for Quill.
The issue isn't so much about room as it is about bandwidth. Specifically, mental bandwidth.
There used to be loads and loads of JS frameworks of dubious merit, with equally dubious developers. Now there's a generally agreed-upon few (Angular 2.0, React, Vue) that are pretty solid and seem to be helmed by mostly sensible people.
Unfortunately, we're still in the early stages of open-source WYSIWYG/WYSIWYM. By the later stages, I hope we can thin out the technically unsound editors with insecure authors.
Creativity and rationality are conventionally seen as more at odds than they actually are. But often, something creative and innovative to one person is boring and logical to another, particularly when the former is not versed in the latter's domain.
For example, many people versed in networking used the early internet to make voice calls, circumventing long-distance telecom bills. It just seemed obvious to them then (and to many more people now) that if you can send data through the internet, why not encode audio in that data?
To me, setting up a server and doing installation and maintenance is in the chore category of work, so 1 hr setup plus, say, 15 min/month maintenance ~= 3 hrs/yr. Good programmer contracts can command $500/hr and the work might actually be fulfilling. 4 hrs * $500 = $2000 pays for years of hosting.
Also you are not going to hit the top of HN on a regular basis as a casual blogger. Look at real data like your blog's historical traffic in the last year to get an accurate picture of the costs.
One should definitely consider the opportunity cost of taking matters into their own hands. However, I think the example is way over the top, for several reasons (most already stated by sibling comments):
* Not sure how many programming contracts can sustain a $500 hourly rate for long. If you can charge that but can realistically only book 10% of the 'available' time, then your calculation should really use $50, not $500.
* The example makes sense only if you're 100% booked all the time, with no spare hour here and there. If you have any spare time whatsoever, you can sell your unused production capacity to yourself at cost price.
* Even if you're 100% booked, not all consultant time is billable time. That alone will already reduce your estimate by about 50% or so.
* Managing stuff takes time (and is a common full-time profession). Your example didn't account for the time you will spend setting up whichever third-party solution you choose. Unless you have hired someone you can take five seconds to tell 'get me set up on X and tell me when you're done' (and who can be trusted to do it right without supervision or course correction), you will perform work setting up the platform. This work may well take a significant fraction of the time you would spend by not outsourcing, which you didn't consider in your calculation.
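A back-of-envelope sketch of the adjustments above, using the thread's $500 figure (the 10% booked fraction is an illustrative assumption, not data):

```javascript
// Headline numbers from the parent comment.
const headlineRate = 500; // $/hr a top programming contract might bill
const selfHostHours = 4;  // 1 hr setup + ~3 hrs/yr of maintenance

// Adjustment argued in the bullets above (assumed value).
const bookedFraction = 0.10; // only 10% of 'available' hours actually sold
const effectiveRate = headlineRate * bookedFraction; // $50/hr opportunity cost

const opportunityCost = effectiveRate * selfHostHours;
console.log(effectiveRate, opportunityCost); // → 50 200, not the $2000 headline figure
```

Applying the billable-time discount on top would shrink the figure further still.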
You may very well really hate setting up servers, and gladly pay a large sum to avoid it. But then you're really paying to avoid pain, and the whole opportunity-cost argument you made may be a rationalization.
> To me setting up a server/installation and maintenance is in the chore category of work so 1hr setup and say 15 min/month maintenance ~= 3 hrs/yr. Good programmer contracts can command $500/hr and the work might actually be fulfilling. 4hrs * $500 = $2000 pays for years of hosting.
The vast majority of programmers in the world (and even in the US) will never see anything like $500 for 4 hours of work (and if they do, it would be some one-off lucky gig), so that's a moot point.
OK, I know this is possible, but how common is it really? I guess it might be a bit more typical for very short contracts, like one-off work, but still.
Is that a $500/h freelancer/contractor fee? Because that seems absurdly high for someone on a payroll, no matter how much of a rockstar they are.
If so, then you have to factor in all the non-billable hours you spend getting clients/contracts, advertising, and doing the general housekeeping required to run a company. Then there's stuff like health care, car, phone, computer, electricity, office and other expenses.
I'm just saying you can't claim "my time is worth $500/hr" if that's what you bill a client for 4 hours a day for a few weeks.
As an open source maintainer, I used to have the impulse to ask for tiny changes before merging, such as what this post suggests. The alternative is just accepting the change and immediately "fixing" it for the contributor.
The main reason I did this was to give contributors "credit" for their work. I wanted the brilliant lines of code to be attributed to you, not to me, who accepted your PR but added a semicolon to the end to make it consistent with the coding style. But it seems people don't mind the core maintainers being janitors for them, and if they continue to contribute, leading by example seems to work better than asking them on their first PR.
The "cost" of this is delay, but more importantly, I believe it dampens the intrinsic rewards of contributing to open source through pull requests. Specifically, a chore component is introduced, and depending on the communication you may also feel reprimanded for opening an unpolished PR in the first place.
What Danger does seem to do well is soften this communication for required steps. I believe the list of such steps is short (signing a CLA and passing CI come to mind), but I would caution and discourage other maintainers from using Danger to enforce an arbitrary rite of passage.
Writing code that meets code-style expectations isn't just arbitrary, though - it's exactly what your brilliant contributor will need to get in the habit of, if he/she wants to become a brilliant maintainer in the future. As you mentioned, the form of communications matters here: A friendly bot that describes where linting failed and what needs to be done to pass CI, plus a brief "Thanks, this feature looks really interesting! Once it passes Danger I'm happy to merge!" from the maintainer will do the trick. There's no need for the core maintainer to be a janitor at all, just an encouraging voice. And that's what we love about open source in the first place, right?
Sometimes I interactively rebase the contributed PR commits to be exactly as I believe they should be, while keeping the original author. This way, when the same person opens a new PR, they will be able to guess the required changes.
Why is optimizing for SSDs nonsense? Empirically this seems correct, since RethinkDB themselves pivoted away from it, but I'm curious about the technical explanation.
coffeemug explained it himself in a podcast where he was interviewed about RethinkDB (I don't remember the name of the podcast). Basically, it boiled down to the fact that all existing databases (at that time) could be tuned with very little effort to take advantage of SSDs, so that differentiating factor went out the window.
It's the same storage backend (well, modified quite a bit), and optimizing for SSDs isn't really the goal anymore, especially since things are now stored in a file rather than on the whole block device, and O_DIRECT is turned off.
In theory, source maps are the answer, but personally this has only worked some of the time. When it does work, though, it is magical and answers your question exactly: debugging in the browser at the source level.