Wow. I'm shocked at the disgusting responses I'm reading here. No wonder people get terrible anxiety about releasing their hard work for feedback. Hacker News seems to be a den of vipers, waiting to strike at the tiniest opportunity to nitpick. And then folks have the gall to bicker and argue over whether the project even has fundamental merit, on the very thread that the author tries to show the "community" what he/she has made. I've been working on a project myself, something I'm passionate about and looking forward to my own "Show HN" thread, but this trend of negativity really makes me hesitate.
I think this is a really cool project, and I commend ianstormtaylor for pushing the envelope and advancing the state of the web. Good job!
edit: I understand criticism has its place in Show HN. But for God's sake, I had to scroll _all the way to the bottom_ to find some kind words of encouragement. You folks should really read some Dale Carnegie
Yeah, I find it very disappointing, and it's not just on Show HN threads either. The community here is still pretty intelligent, but I feel like it's become almost unbearably negative in the past year or so. I know people always say things like that about communities, but I really feel it on HN. Not so on Reddit, where people can still be positive and happy (small and large subreddits).
There is a place for criticism, even non-constructive. But HN sometimes is an echo chamber of only criticism and negativity.
> Hacker News seems to be a den of vipers, waiting to strike at the tiniest opportunity to nitpick.
When someone presents a sensible, practical, and actually innovative idea, I've found the responses to be fairly favorable. With small things that have been done over and over again (worse or better), I think you can expect some comparisons and questions.
I can't decide if people are overly competitive and frightened by the increase in the developer population, or if we're just bitter and cynical toward anything we can't put our name on.
I don't think it's that, really. It's hipster to snark at and dislike things, and hipster is apparently in. That being said, some things should be disliked; this isn't one of them.
Why do programmers, in particular, seem to lose respect for others so quickly? Is it that ours is such a technical field that people often value 'straight-talking' at the expense of politeness (see: Linus)? Is it that software is so difficult to do properly that almost everyone becomes embittered on some level? It's such a weird phenomenon when you have things like open source, which are fundamentally positive, cooperative forces in spirit. The same class of people who want to give their work away for free will bitterly attack others for doing that very thing.
I know, it's really not good. I actually find this project interesting. I think the fundamental idea has a lot of merit, and the individual details can be worked on.
This only polyfills some of these features, and only in the most superficial way. To polyfill them according to the spec, you need to do it in the browser.
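For example, consider a stylesheet along these lines, with the page's title markup sitting inside .sidebar (my own illustration of the kind of case being described, not an example from the Myth docs):

:root { --title-color: red; }
.sidebar { --title-color: blue; }
h1 { color: var(--title-color); }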
If I load that document in a browser that supports CSS variables, the title will be blue. But if I run it through Myth, it drops the blue rule and makes the title red. This is because CSS variables are inherited throughout the document and can be overridden at any point. The computed value of a property that uses a variable depends on the document structure.
Likewise with calc(): if you multiply values as in their example, it works, but if you try to add two values with different units (e.g. 2em + 30%), it silently falls back to requiring browser support for calc().
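To illustrate (my own sketch):

.box {
  width: calc(100px * 2);   /* can be resolved at build time: 200px */
  height: calc(2em + 30%);  /* mixed units, can only be resolved by the browser */
}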
This might be useful in narrow circumstances, but it should have big warning signs because it doesn't come close to being a proper polyfill.
> Myth lets you write pure CSS while still giving you the benefits of tools like LESS and Sass.
Having a "polyfill" is certainly a valid justification. But this doesn't come close to LESS/Sass -- I'd argue that the main feature of those is nested rules, and then mixins.
Variables and calculations are great, but most LESS code I've encountered uses nesting and mixins to a far greater extent. Advertising the project as "the benefits of tools like LESS and Sass" seems misleading, and seems to set up expectations that Myth doesn't fulfill.
Yeah, it's definitely true that it doesn't support all of the features of Sass or LESS. But it does support some of the core features that people use every day, and it could replace a lot of the need for preprocessing.
From everything I've seen, nesting rules is an anti-pattern. It's useful for writing code as fast as possible, but for any project of reasonable size, maintenance is a far bigger worry than LOC per second, and nesting makes for some horrible-to-maintain CSS. Until we have native extending in CSS, you're much better off going the "one-class-per-element" route for maintenance.
As for mixins, I actually think they are an anti-pattern too. They're usually used for two different things. The first is to simply make vendor-prefixing easier, which this library already eliminates. For example the LESS homepage shows mixins being used when really it's just a regular old `border-radius` property. And the second is for extending classes with groups of properties, but mixins are a bad practice there because they just duplicate the code instead of doing any smart inheritance, so you end up with huge code bloat.
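For example, the sort of mixin I mean looks roughly like this (paraphrasing the LESS-homepage-style example from memory, not their exact code):

.rounded-corners(@radius: 5px) {
  -webkit-border-radius: @radius;
  -moz-border-radius: @radius;
  border-radius: @radius;
}

#header {
  .rounded-corners;
}

Once prefixing is handled for you, that whole mixin collapses to a plain `border-radius: 5px;`.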
The real missing feature from preprocessors in my opinion is the idea of "extending", but I wanted to wait for the CSS spec for extensions to be more thought out before adding it, in case it varies a lot from how preprocessors do it.
---
The reason this came about is because we went back to writing pure CSS at Segment.io (after having used LESS, Stylus and Sass at different points) and realized that we shouldn't need to give up things like variables just because we don't want the extra weight of a new syntax.
If you think nesting rules is an anti-pattern, I feel like you've never worked on a large-scale web project. When you need to ensure that sections of pages are self-contained, and components likewise, you need to start every rule with ".titlebar" or ".large-button", etc. Nesting just makes your CSS much cleaner, by writing ".titlebar" once instead of 30 times (for each of the 30 rules that apply to things inside the title bar), and making your stylesheet more organized and easier to browse. It doesn't mean you need to nest every single element, but it's a fantastic tool for organization, particularly when multiple people are working on a project.
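For example, in LESS or Sass you can write something like (a small sketch):

.titlebar {
  .title { font-weight: bold; }
  .button { float: right; }
  .search { width: 200px; }
}

instead of writing out the prefix on every rule:

.titlebar .title { font-weight: bold; }
.titlebar .button { float: right; }
.titlebar .search { width: 200px; }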
Likewise, I feel like you're missing one of the main points of mixins -- that they allow you to repeat commonly used calculations that may involve multiple properties in complex ways. For example, a mixin I often use lets me create a vertical gradient by specifying a base color and a brightness spread amount, instead of start/end colors. It's incredibly valuable, and it also allows me to automatically support IE and specify a fallback base color. Mixins are hugely valuable for creating classes based on sprite positioning too. Going back to pure CSS without mixins would be a huge step backwards in maintainability for any decent-sized project I've worked on.
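As a sketch of the kind of mixin I mean (Sass syntax, names and values made up):

@mixin vertical-gradient($base, $spread: 5%) {
  background-color: $base;  /* flat fallback for older IE */
  background-image: linear-gradient(to bottom, lighten($base, $spread), darken($base, $spread));
}

.btn-primary {
  @include vertical-gradient(#4a90d9, 8%);
}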
Obviously, both nesting and mixins can be abused or used inappropriately, but I'd never call them anti-patterns -- to the contrary, they're essential for front-end development on sites that reach a certain level of complexity, for cleanly factored code.
I don't always agree with Ian's opinions, but he definitely has experience with large scale web projects.
This is a contentious subject, so take everything I say as just, like, my opinion. Using generic class names like `titlebar` is likely to result in namespace collisions. I define a project by modules, and use a slightly bastardized version of the block, element, modifier naming scheme to prevent this. It's verbose, but I consider the extra typing a reasonable price to pay for the ease of maintenance. For example: `.mod-modal`, `.mod-modal--title`, `.mod-modal--button_large_danger`. A sparse library of grid and spacing utility classes allows flexible formatting without having to write new definitions for every variation of your modules. Content hooks are provided for edge cases where you really do need to override some module styles for a specific page or design element. I use a `myproj-contentdescriptor` naming scheme. I apply these liberally but rarely define them in the stylesheet. The excessively convoluted naming scheme prevents leakage and interference from 3rd-party libraries. It makes for some ugly-ass markup, but this is a pattern I've settled on after ten years of writing CSS at my day job.
Edit: Ian just argued the same case more eloquently. This methodology seems to be gaining traction amongst framework authors over the last two years.
Actually, nesting like that creates unnecessarily high specificity. You'll end up with selectors like .titlebar .titlebar-button, you'll run into specificity conflicts, and the ordering of your CSS will start to matter.
You're often better off just using BEM-style selectors and using the selector name itself for the nesting:
.titlebar {}
.titlebar-button {}
As for mixins, yeah, you get some benefit if you're trying to be super tricky. But that button you've created is used once or twice, plus you could just use CSS variables and colour functions.
The question is now whether the added complexity everywhere else is worth that tiny benefit. There are plenty of people who have built large front-end sites and found that Sass/Less in fact make it harder to maintain.
But honestly, this stuff is totally subjective in a lot of cases. Different projects, different tools. If it works for you then stick with it, for other people, Myth and Rework are perfect.
+1 for BEM, OOCSS or whatever each author calls it (Snook has his own name IIRC).
Having written CSS for a decade, and having formed my own ideas about what semantics CSS had and how it should be structured, I was very surprised when I actually gave BEM/OOCSS/whatever a go. It works. And I don't want to go back now.
I could be wrong, and they both can work if extreme caution is applied. My general take is that they make it way too easy to do the wrong thing, though, at very little gain (assuming other parts of your system are set up properly).
---
For nesting, the problem is that it messes with specificity way too much. Since adding an extra class inside a nested block is trivial, people start adding more and more. But even just two classes is already too much in my opinion. Ideally each component would have classes on each of its inner elements, so that the component-level styles always have a specificity of 0010.
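Something like this, with illustrative class names:

.Modal { position: fixed; }
.Modal-title { font-size: 18px; }
.Modal-button { float: right; }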
And that way all of your selectors are minimally invasive in terms of specificity. The problem with nesting is that you end up with selectors that have a specificity of 0020, which means anyone implementing those components needs to be sure to reach that level of specificity or greater to override them.
But honestly, when working with small components there just isn't enough typing saved to justify the possible downsides. Sure, I have to type out the extra selectors a few more times in the beginning, but it's worth it to avoid the unintended consequences of the magic.
The one case where nesting really does come in handy is for page-specific or app-specific stylesheet files. Places where you are adding the "glue" CSS that should always be namespaced to avoid collisions.
---
For mixins, I feel like they only seem useful if the components you're using are broken into small enough pieces to begin with. For example, if you want a complex gradient system on your buttons, put that inside your button styles and then don't touch it from anywhere else. If you're constantly needing to use a mixin to re-define those styles, something else is wrong. Because realistically how many different color variations of those buttons do you need to make? It should be somewhat limited, and they should be able to exist in a single file without issue.
Where things really get screwed up is if you aren't versioning your mixins (which most people aren't doing). At that point, you're just tightly coupling changes to that mixin across your entire codebase. If you want to change the way the buttons implement the gradient, with the mixin system you go change the mixin, but now you need to visit every other component that uses that mixin and double-check that you didn't break anything.
Instead, if you didn't use a mixin and the component just contained its gradient logic (slightly more verbose maybe, but way more maintainable) then you don't have that problem.
The other solution is to strictly version your mixins using something like Component and maintain your mixins in a separate repository on GitHub that gets versioned, then that also works because each other component can peg the version of the mixin it is using.
---
My big takeaway from CSS though is that none of the solutions for it are nice right now. And most of these problems should disappear (I think) with proper native extension.
You can use LESS's nested syntax in order to generate prefixed CSS classes instead of nested CSS selectors, if you prefer them. That gives you the advantage of prefixed CSS classes, but you don't have to repeat the prefix and you have all classes with the same prefix grouped together, which IMHO makes large projects easier to maintain.
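If I understand the LESS feature you mean, it looks roughly like this:

.titlebar {
  &-title { font-weight: bold; }
  &-button { float: right; }
}

which compiles to:

.titlebar-title { font-weight: bold; }
.titlebar-button { float: right; }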
That is pretty cool. I think generally that the components themselves should be more limited in functionality to the point where it's just not that big of a deal to write out `textfield` two more times. As in, I think it's optimizing for a case that isn't that important. If you write a component, the number of times you're going to change the root class name of the component is very small, so it doesn't need quick rename-ability. And then the number of nested elements inside of it should also be small, so you don't have that many gains there either.
Basically it's anti-nesting and proposes being very specific with regards to class naming:
.block {}
.block__element {}
.block--modifier {}
I've experimented a bit with this lately but haven't come to a conclusion yet myself. Yes, it does decrease the chances of accidentally styling the wrong elements, but it also makes the HTML more verbose, and it can in some cases make it more difficult to implement the website with jQuery because of the different classes in use. I.e. instead of a .selected class you might have .selected__orange and .selected__apple, so to switch the selected item you can't just do a .removeClass('selected')/.addClass('selected').
In this simplified case, probably not, you're right. But it guards against the more complicated ones. For example, our form has two labels (one, called a legend, goes underneath the control).
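Roughly like this (simplified, made-up markup):

<div class="Field">
  <label class="Field-label">Email</label>
  <input class="Field-input" type="email">
  <label class="Field-legend">We'll never share your address</label>
</div>

An element selector like `.Field label` hits both labels, so the class names are what keep them distinct.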
It just makes things future-proof for when you want to add other elements that might collide, since the number of element types is small.
Generally I try to never select on non-class selectors in component-level CSS. Component-level styles should occupy the middle range of class selectors. And then the user should have control over the element selectors (base styles) and id selectors (page-specific styles) so that they can both cascade and override when they need to.
That way, you don’t mess up your template/HTML with classitis. I like to reserve the use of classes for js logic (states) and for semantics (SEO, schema.org, …), thus using the `data-x` attribute to create and select components.
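Concretely, something along these lines (attribute names made up for illustration):

[data-component="modal"] { position: fixed; }
[data-component="modal"] [data-role="title"] { font-size: 18px; }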
Sure your component’s html/handlebars/whatever template must be kept in sync with its css, and adding elements could totally mess-up things. But I think that’s okay, since changing a component’s template should require a revisit of its stylesheet. What do you think?
Hmm, I used to try that kind of stuff, but it gets kind of unmanageable pretty quickly. And sometimes it just plain breaks down. For example, with the same form field example: we use the idea of `Field-label` and `Field-legend` across multiple field types. Checkboxes end up having the `Field-label` be the second child instead of the first.
Generally, trying to tie it to order in the DOM just becomes very fragile, when really a class name is actually more semantic, in the sense that the meaning sticks with the element and you can use it anywhere.
I think trying to separate JS hooks from CSS hooks is an anti-pattern actually. Like having `js-` prefixed classes. Because realistically you aren't changing the class names enough for it to be a problem. And if you do change the class names there's a good chance you'd want to change the `js-` prefixed one too and eliminate any benefit they were giving you in the first place.
You might be able to get it to work for simple components though. But I'd be wary, since there are only so many different HTML elements, and the special pseudo-selectors can only get you so far.
No worries, I like explaining it as a way for me to think through it myself.
The problem is that if you add "js-" classes that probably means you add the equivalent "js-"-less classes on the element as well. So then you just have duplication with the idea that they are "decoupled". Which might be good if it was true, but realistically if you change the component around to have a different DOM structure you're just going to end up changing both instances of the class anyways.
Really it's just trying to solve a problem that shouldn't be solved. Just agree upon an API (and put more effort into making the API good from the start) and then you don't have the problem to begin with. If you really do need to change the API later (which should happen rarely) then just put in the work to do it. Generally breaking API changes are rare if the component was designed properly from the beginning, and if it stays small in scope.
Templates already have too many classes as it is without having to dupe them all :)
Regarding nesting, I've always followed the use-nesting-instead-of-prefixes model, so I might use ".big-button .title" and ".main-article .title" classes in a project, but because each title is nested in a different component or whatever, it's easier to read, less redundant, and more "semantic". A project might easily have 50 different ".title" classes, but you know you don't have to worry about collisions. But that's really just a matter of formatting taste, ultimately.
But you say:
> If you're constantly needing to use a mixin to re-define those styles, something else is wrong
> realistically how many different color variations of those buttons do you need to make?
> if you didn't use a mixin... (slightly more verbose maybe, but way more maintainable)
That's an awful lot of presumption, and it's just totally wrong, at least in my case. To use my gradient example: I probably apply this model to 50 or 100 types of elements across a site that's not even that big -- to buttons, to backgrounds, to panels, etc., because the designer likes subtle background variations -- flat design, but not too flat. So nothing is wrong, there are tons of variations, and using mixins is just following the "don't-repeat-yourself" model.
Obviously, you could call this a model of "tight coupling", but that's a good thing, not bad -- you define a single model of mixins for your site, just like you have a single global CSS reset. It just makes up for certain common deficiencies in CSS, and helps create a common visual identity for your site where CSS classes themselves just aren't flexible enough. (There's a whole other reason to use mixins exclusively within specific components, but you seem to agree on that above.)
The "slightly more verbose" alternative you propose would actually be incredibly verbose (maybe 7 CSS properties per element instead of 1, if used on 100 types of elements, meaning 700 lines of code instead of 100), because nothing's being reused. If a new browser comes out which requires a slightly different tweak, you would need to manually fix it in 100 places instead of 1, which is horrible. Mixins simply allow you not to repeat yourself, which should be the goal with all good programming. CSS without mixins is kind of like programming in Python without being able to define functions.
(And asking people to version their mixins makes as much sense as versioning functions in a C++ project because changing a function changes its behavior. I.e., not much with rare exceptions. If you change a mixin you've been using for a while, then it goes without saying you inspect where it's used in the site to make sure it doesn't break things.)
> A project might easily have 50 different ".title" classes, but you know you don't have to worry about collisions.
Not if your UI is of sufficient complexity that you're composing components of other components. Then not only do you have lots of collisions, you have a huge specificity war too. That's why prefixing and single-class-name selectors are generally preferred.
Of course. Nesting is just a tool that helps organize your classes. I've found the most maintainable code comes from a well-defined combination of nesting and prefixes (but I tend to use nesting as a default, and prefixes only when necessary) -- they both have their strengths and weaknesses.
But CSS precedence is always a problem: if it's not specificity, then it's ordering instead; the problem doesn't go away by using prefixes. And relying on the order that rules are defined in can often be more difficult to track and maintain than using specificity for the same purpose. Good commenting practice is key in both cases. Nesting with mixins certainly isn't always the answer, but it often provides a much cleaner, more maintainable, more intuitive approach to resolving occasional "specificity wars" than not using them.
I feel like a lot of the fear of nesting came about in the era when you couldn't do direct-child selection reliably. That was overcome a long time ago now, though, and it's a lot easier to limit and control where styles apply with direct-child selectors.
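For example (a trivial sketch):

.menu > .item { display: inline-block; }  /* only direct children */
.menu .item { display: inline-block; }    /* any descendant, can leak into nested menus */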
When nesting is used incorrectly it's absolutely an anti-pattern (my first LESS project regrettably mimicked the HTML structure with nesting), but when everything is kept as semantic/modular as possible and nesting is avoided except where obvious (a UL class that styles the children LI/A tags, etc) it can be a great help in keeping your CSS simple and understandable.
While it's more difficult to replicate the same issues in native CSS, due to the friction involved in copying/pasting the same elements/classes over and over, I don't think that should be a reason to entirely avoid a tool that can be a great benefit in the long run.
On the other hand, per our email I have to say I'm more and more skeptical of the benefit and long-term maintainability of SCSS @extends and @mixin except in special circumstances.
> just because we don't want the extra weight of a new syntax
Sass defaults to .scss syntax, which is exactly CSS syntax. Okay, exactly a superset. Since it merely adds $variables and other features, you are free to ignore them and just write “pure CSS” but with variables.
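In other words, any valid CSS file is already valid SCSS, and you only opt into the extra features where you want them (a trivial sketch):

$brand: #847AD1;

a { color: $brand; }  /* a Sass variable */
p { color: #333; }    /* plain CSS, passes through untouched */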
I like the idea of retaining preprocessors for nesting and mixins to write good CSS quickly, but also having the postprocessor at the other end to add polyfills where necessary.
This way, I can write SASS to target modern browsers and not worry about backward compatibility so much, because I know the postprocessor is there to catch the older browsers. e.g.: I don't want to worry about specifying a solid block colour for CSS gradients, I just want to specify a gradient.
But yes, I agree the implication that this could replace preprocessors is unrealistic.
When it says "giving you the benefits of tools like LESS and Sass" I think they are referring to how you can postprocess the CSS generated by LESS and Sass.
Hmm... I can't quite see how the sentence could be parsed that way.
But even if that is the case, it doesn't make any sense to me, and I can't begin to understand how they could be compatible with each other. How are you going to write "var(...)" or "color(...)" declarations within LESS, that LESS will ignore, just to "pass through" to Myth? And why would you? As far as I can tell, Myth doesn't do anything that LESS doesn't already do, so I don't see why anyone would add Myth on top of LESS.
I don't get it. If you are using less or sass anyway.... why would you use myth as a post-processor, since myth doesn't do anything you couldn't do already in sass or less?
I used to be a big supporter of Sass, but now I prefer the simplicity of CSS. Look at almost any Sass project and it's usually overly complicated. I stopped writing Sass a few months ago and I haven't felt crippled at all.
The complexity of Sass/LESS is starting to outweigh the benefit of their features, for me at least.
That's on the individual project maintainer, though. Any valid CSS3 file is also a valid SCSS file, so you can add enhancements on top of that exactly as much as you want to.
I first read it as allowing you to use tools such as LESS/Sass in conjunction with it. I see now that it wasn't really stated like that, but that seems to be the case anyway: it's compatible with a pipeline using both legacy CSS and LESS/Sass.
I've avoided the CSS preprocessors for some reason, something about learning a CSS pseudo-language just didn't feel right. The idea behind this however is awesome. You're writing simple, true CSS and it does the annoying work of making it crappy CSS that browsers want. I might actually be able to get behind this.
Well, I don't know how to do nesting right now so there WOULD be something to learn :D
Maybe it was that a clear winner never arose between LESS and SASS or maybe that I always let things play out a bit before jumping onto a trendy technology. I guess I'll have to just get with it since it's not really going away.
I avoided preprocessors for a while too, but I can't imagine not using SASS now. Even if all it had to offer was nesting it would be worth it, but with variables and mixins it really shines.
There's nothing about "true CSS" that makes it intrinsically superior to the "pseudo-languages" that have to be preprocessed to generate CSS. The spec merely determines the features browsers have to support, not the manner in which people have to use them.
You all should be aware that it is impossible for CSS preprocessors/postprocessors to fully replicate calc() and var(). For example, you won't be able to do something like calc(100% - 200px) or have scoped variables.
A lot of thin fonts look bad on Windows, especially in Chrome. So it's probably a safe bet that if a font is thin on OS X, it will be much thinner (sometimes unreadable) on Windows.
If you want to use thin fonts, only use them for titles (but even there it looks like that causes problems).
I think they chose "post-process" based on the following feature:
Taking plain CSS as an input also means you can use Myth to post-process anyone else's CSS, adding the browser support you need, without having to re-write their code in a completely different syntax.
What distinguishes it from LESS or SASS or whatever is that the input has to be valid CSS, as well as the output. Their claim is that it makes your CSS act now like it will in the future once the version of the spec they're targeting has been fully adopted.
To me they chose the wrong word: it's still preprocessing, because current CSS engines don't accept future CSS. Therefore it must be preprocessed by Myth before it's usable in the browser.
That would be a no-op with LESS/Sass unless the input was not "valid" CSS.
This project will "enhance" "normal" CSS to take care of the polyfills, which alone is a win. Take any CSS file you've got, and it suddenly has all the browser prefixes thanks to this tool. This is something LESS/Sass cannot do.
The reading I think they are going for is that preprocessors generate CSS from something else, while this uses (the platonic ideal of "true") CSS and generates the shambled code that the browser needs. But I could be way off.
I think the idea is that, instead of writing some other style language (Sass, LESS, etc.) that is turned into CSS, you write CSS that turns into different CSS. This means you can use Sass/LESS/etc. and Myth together.
When I saw "post-processor" I was expecting this to be some kind of browser extension or plugin that post-processes (on the client side) CSS received over the network.
It seems that they have basically just done a pre-processor that accepts CSS as input and returns CSS as output. It's exactly like every other CSS pre-processor, except that its DSL for pre-processing is a subset of CSS rather than a superset.
Very cool, but once you start working in something like LESS or Sass, it totally changes how you write styling and becomes something more than "CSS with variables". The possibilities they offer are more than features, it changes your entire workflow. Personally, I won't go back.
But besides all that, it's pretty sweet to have something more like "CSS with variables". That can come in super-handy sometimes.
The site looks gorgeous and is perfectly readable in Firefox 25 on Ubuntu.
Also this looks pretty neat; I wasn't super interested in learning to use Sass/LESS or working it into my development cycle, but this looks like a good step towards not having to make much of a change while still reaping some nice benefits.
This looks pretty cool. It should probably explain that it's a static subset of the spec, and not actually a polyfill for the spec itself, since the spec allows for dynamic, cascading variables as well as dynamic calculations.
All custom fonts look like a disaster in Chrome on Windows. It's the only browser that still uses GDI to render text on Windows, where Firefox/IE use DirectWrite. All fonts end up being rendered very thin, and if they're thin to start with, that causes discontinuities in the glyphs. It also affects the color; the rendered color may be significantly different from the text color specified in CSS.
It's been an open bug on Chromium/Chrome for several years.
Is there a workaround I (the user) can employ to make Windows+Chrome work right? I've long since given up on sites to figure out a workaround but would really love to have readable sites.
This is depressing. They closed the comments so they don't have to hear anything from their userbase. If this kind of problem happened on the Mac, the bug would have been resolved years ago.
They're running Macs or Linux and never see the garbage when they're designing their sites. The poor rendering only occurs when you use GDI to render the text which is only used by Chrome on Windows (and Firefox in Windows XP, but not Vista/7/8).
Definitely going to give this a try. I really appreciate the apparent simplicity and creativity of this tool. It avoids the need to learn the odd syntax of LESS/Sass for those of us who need reliable cross-browser support while providing many (though not all) of the benefits of precompilers.
Man, I really hate when the creator of a site expects me to scroll down. I have a 1080p monitor; if I can't see any content at that height, I have to assume there isn't any.
Agreed. It seems to be a design trend that's on the upswing lately, and it's frustrating. I encountered a mobile site yesterday that did that... to me it's the equivalent of the "click to enter" home page of bygone days. Uh, no thanks.
I honestly can't tell if you guys are being serious. Click-to-enter was absurd because it was a page load just for the sake of a page load. Scrolling down is just spacing out your content to give it a visual feel. This one in particular gives a short explanation to let you decide whether you want to scroll down first. There's no cost here, just the courtesy of not overwhelming anyone who isn't actually interested in the details.
I don't have a problem with the concept if the fact that I have to scroll is obvious. I opened that page, read the title, and thought, "What am I doing here again?" I'm on a Mac, so there are no scrollbars. Do you just want me to guess that there's something down there? The least you can do is hint at the content below.
Congratulations on being an ignorant elitist. I have both Mac and PC and have been using both for a very long time. The point of my statement is that whether or not I can see my scrollbar, I shouldn't have to look at the size of the bar to determine the functionality of the site.
I kind of like it. It looks nice, and the low info density makes reading (relatively) dense subject matter easier. With this design trend you'll probably start assuming that there is room to scroll when there's only a title on the screen.
I had no idea that was even an option until I read your comment. Chrome on OS X shows the v-scroll bar for less than a second, and I missed it. After that, the only way they come back is to scroll. There was no visual indication that it was anything more than a title page with no links (aside from the github link).
I wish there were a revolt against W3C. They have consistently made a mess of everything they touch (and take forever to do it). Why reinvent the wheel yet again with another fuglier syntax? We already have Sass and LESS which are widely used and quite beloved. Just adopt the best of those and save us from yet another "XSL-FO". Please! For God's sake, man!
That's a case of assuming the worst without knowing the decisions that went into the spec. The idea of using a simple `$` syntax for variables came up and was discussed but was eventually rejected. If I recall right it was rejected because they thought they might want to use it for a separate feature in the future. And they thought the function syntax better fit in with the rest of CSS (which uses functions a lot) anyways.
I know I used to think the var syntax was incredibly ugly, but I have to say after using it for a while now, you actually get used to it.
But read through the CSSWG mailing lists and you'll gain a lot more respect back for the decisions they make.
No. The reason Sass or LESS require compiling is to support features that aren't actually part of the CSS standard, so they need to be compiled down to things that are part of the CSS standard.
In his world, the best of those features would be part of the CSS standard and supported by browsers, so you could just give them right to browsers without compiling anything.
Looks a lot like a pre-processor to me, except that its functionality is limited to what can be defined by pure CSS. I'm not sure why you would choose this preprocessor over one with more functionality.
Somebody must have thought it was a good idea to have gone through so much effort, so perhaps I'm missing something.
This is cool, but the problem I have with the variables is that it's based on a draft of a spec that Mozilla hasn't even finalized... if it changes in the future, this could break.
Can Myth postprocess LESS output and guarantee that it'll "just work"? I like the vendor prefix feature (although there's a 'LESS Prefixer' project too).
Yes, I didn't know about it :). I'll probably go with autoprefixer, but with Myth I could also use some of the other features and perhaps aim for a slow transition from LESS to CSS over the coming years.
Nah, I just personally think the "Fork on GitHub" line is wrong because I'm very rarely trying to go from a tool's site directly to the "fork" action. Instead I think something like "Source on GitHub" or "View on GitHub" is more appropriate. And since I just launched it, in this case I thought "Star on GitHub" could work. I don't mind too much really, just anything but forking :)
Ah ok, that makes sense. I think "View the code on Github" is the best way to say it. Star on Github reminded me of those people who bug you for reviews on the app store, which only helps you and not them. View the code is a two-way trade.