What you say is very true. Furthermore, there's no guarantee that code associated with somebody's GitHub account was actually written by that individual. It's useless to gauge the ability of a developer based on code in his or her GitHub account that somebody else wrote.
If that misconception is the "only one thing" you know about getting a certification, then why do you feel it appropriate to comment on the process?
You don't need to buy the books in order to pass the certification exams. Experience is often far more useful. A list of topics covered by the exam is often provided, too. So somebody with experience can often easily supplement their existing knowledge by reading some online articles or documentation, without ever looking at one of the official books.
And one of the core aspects of certifications is that they usually very specifically target a given product or topic. This is one of the things that differentiates them from college degrees and other ways of suggesting qualification. A certification can help an employer gauge a candidate's abilities in a far more specific manner than a Comp. Sci. degree can, for example.
Certification may not always have the value or reliability that it's claimed to have, but let's not misrepresent it or its process out of ignorance, either.
Fair points; perhaps I was a little rash. The reason I feel irked by the idea is that it seems likely to lead to greater lock-in of the technologies and products a business can use. More investment would be needed to support a given technology or product, making it harder to switch to something else if requirements changed. I can see the use of certifications for hiring, as well as for credibility with clients, but I feel there's probably a fair amount of danger in this too.
As for the books, that was an off-hand comment, half to illustrate that I am, in fact, not experienced with the world of Microsoft Certifications, and so whatever I say should be taken as comment from a layman :)
That approach could have unintended consequences, though. A department could simply cut back on the services it provides in order to cut back on its spending and stay well within its budget. If those services are critical, that outcome may be worse than some wasteful spending paired with a higher level of service.
Agreed. Certainly there'd need to be some other metrics to judge a dept on (service feedback, goals hit, timelines, etc.) in addition to budget. However, we seem to be so far toward the other end of the spectrum that wasteful spending is encouraged, or one might even say required.
I've done work for state and city agencies over the years and it's generally the same thing. "Well, if our budget is $4m, but we only spend $3.85m this year, we won't be able to get $4m in next year's budget, and we know we'll need it then, so we have to spend the other $150k now so we can get more next year." Again, it simply boggles my mind that a budget process would take into account only a department's requests, and not also its track record.
I've had to deal with it even with app hosting for some clients. "What do we need?" "Well, right now we only need one server, but if demand goes up, we'll need 3 servers in 6 months, and we'll need them for about 2 months." "Well, we'll need to order 3 servers and pay for a year to get it into the budget." Huh? Variable pricing has been foreign to most govt depts I've worked with over the years.
What you propose may sound great and easy in theory, but in practice it's usually a huge disaster.
The truly useful code of the application will quickly become overwhelmed by the code that tries to abstract away the high-level abstractions offered by the frameworks being used. The boundary or interfacing code you mention ends up becoming a custom framework in and of itself, but usually far more limited than the underlying frameworks.
It's pointless to use a framework in the first place in order to reduce the time and effort needed to build a software system, only to immediately try to abstract it away with a bunch of custom code.
As an industry, we've seen these kinds of systems in the Java world for many years now, and they never turn out well. The performance is awful due to the layers upon layers of abstraction. It becomes extremely difficult to partially, never mind fully, comprehend such a system. This in turn makes even simple changes risky and awkward. And in virtually all cases, the abstractions end up being useless, because the underlying framework is never actually changed at any time. And in the rare cases that it is, a huge amount of work is needed anyway because the code abstracting away the framework never does this perfectly.
The cost of implementing and maintaining the type of abstraction that you propose can easily exceed the cost of rebuilding from scratch software systems that are highly tied to one or more frameworks.
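To make the failure mode concrete, here's a hypothetical sketch of the kind of "framework-insulating" layer being described. All the names here (`Repository`, `FakeSession`) are invented for illustration; the fake session stands in for a real ORM so the sketch is self-contained:

```python
class FakeSession:
    """Minimal stand-in for a framework's ORM session, so the sketch runs."""
    def __init__(self):
        self._store = {}

    def add(self, key, obj):
        self._store[key] = obj

    def get(self, key):
        return self._store.get(key)


class Repository:
    """The custom 'framework-agnostic' layer.

    It re-exposes only the two calls someone remembered to wrap; joins,
    transactions, bulk operations, and eager loading are all missing,
    so real code ends up reaching past it to the session anyway.
    """
    def __init__(self, session):
        self._session = session  # the framework still leaks through here

    def save(self, key, obj):
        self._session.add(key, obj)

    def find(self, key):
        return self._session.get(key)


repo = Repository(FakeSession())
repo.save("user:1", {"name": "alice"})
print(repo.find("user:1"))  # {'name': 'alice'}
```

The wrapper is a tiny custom framework in its own right: it must be documented, tested, and maintained, yet it covers only a sliver of what the underlying framework offers.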
Exactly. Abstraction upon abstraction upon abstraction. I still prefer to just write plain SQL. (They call it "raw" and it comes with warnings!) That's enough abstraction for me :)
We may not use GNOME today, but many of us did happily use it before the GNOME 3 disaster. After that happened, though, we had no choice but to move to other environments.
Some of us even hoped that maybe someday the situation would reverse itself, and GNOME could once again become a viable desktop environment. Unfortunately, it has become clear over time that this is not the case, and likely never will be.
It's disappointing to see a project that was once quite useful, yet still with a lot of potential, be destroyed so quickly and unnecessarily. And it's perfectly acceptable and understandable for us to voice our displeasure with further degradation of what GNOME once stood for.
That's sort of a useless distinction to make, in practice. If jQuery isn't usable as-is, then it could very well be said that this problem is at least partially due to using jQuery.
Having to play games with jQuery to strip out or alter some of its functionality just to get it to appease Mozilla really isn't much different than any other bug that might need to be patched to get jQuery to work in a certain situation.
Can you use jQuery with Firefox OS, yes or no? The answer is yes. The Firefox OS devs even provided the author with a version of jQuery that works as-is. Thus the title is misleading. Maybe it should say it is not compatible with jQuery Mobile, which would be less surprising, because that's a giant everything-and-the-kitchen-sink library that more closely resembles jQuery UI than jQuery. I've had trouble with jQuery Mobile and Android in the past.
You're reciting how things are supposed to work, not how they are working. The provided jQuery version did not work. I do not believe you carefully read the entire blog post.
"Appeasing Mozilla" is not what triggers these warnings or errors. Privileged apps have access to more powerful APIs, but they are also subject to a more stringent Content Security Policy (CSP) to prevent them from running malicious code that could potentially hurt the user.
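As a rough illustration of what such a CSP restricts: strict policies (those without `'unsafe-eval'`) block dynamic code evaluation at runtime, and older jQuery code paths relied on patterns along these lines. Run in plain Node, where no CSP is enforced, both lines below succeed:

```javascript
// Under a strict CSP, eval-like constructs such as new Function() are
// blocked at runtime, because they turn strings into executable code.
const viaFunction = new Function("return " + '{"a": 1}')(); // rejected by a strict CSP

// The CSP-safe equivalent parses data instead of executing it:
const viaParse = JSON.parse('{"a": 1}');

console.log(viaFunction.a === viaParse.a); // true (no CSP enforced in plain Node)
```

The point of the policy isn't style: a string fed to `new Function` or `eval` can come from anywhere, which is exactly the injection vector a privileged app must not expose.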
If jQuery's build system has options to create a version that is compliant with our CSP, I don't see any reason to be up in arms.
It doesn't, or at least it is not obvious from the documentation how to do it (yet).
As one person on the list mentioned, the warnings should not have been the reason for the rejection, because they were essentially false positives; the only problematic thing is that they were.
They are still working out the issues, I assume; it's a really young platform yet, so this could kind of have been expected. I just wish there were a way to talk to the reviewer and ask them more questions.
Indeed I did - thanks. I just remembered it was the version before 2008, as I left the company in the middle of the upgrade to that one.
The columns being renamed on views bit us big time. We had a large ERP package that we'd built numerous custom views on top of. Whenever we applied an update to the ERP package, it'd add columns and all our views would be borked.
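For anyone who hasn't hit this: SQL Server expands and stores a view's column list at creation time, so a `SELECT *` view keeps its stale column list after the base table changes until its metadata is rebuilt. A minimal sketch (the table and view names are invented for illustration):

```sql
-- Column list is expanded and stored at CREATE time, not at query time.
CREATE VIEW dbo.vw_Orders AS
    SELECT * FROM dbo.Orders;

-- An ERP update adds a column to the base table...
ALTER TABLE dbo.Orders ADD Discount DECIMAL(9,2) NULL;

-- ...but vw_Orders still reports the old column list (or, if columns were
-- reordered or dropped, can return data under the wrong column names)
-- until the view's metadata is refreshed:
EXEC sp_refreshview 'dbo.vw_Orders';
```

Scripting `sp_refreshview` over every dependent view after each package update is one common mitigation; avoiding `SELECT *` in view definitions is another.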
That platform already exists. It's the "platform" that naturally happens when there aren't any "app stores" or "app marketplaces", and when devices aren't artificially crippled to prevent users from freely installing software.
It's a bunch of people and organizations around the world offering their software for download over the Internet. It's other people who then possibly pay for and download those apps.
It's distributed, it can optionally include compensation, and how that compensation is delivered is up to the app creator and purchaser.
It may not be as pretty or convenient as what Apple or Google offer, but at least it has the freedom that's desired.
Of course, a package system like dpkg or pkgsrc can be used to help make it far more convenient to find and install software. And websites offering a directory of available software can help with this, too.
All of this already exists. I think that a lot of people have just forgotten about it within the past five or six years.
No, I mean the platforms from Apple and Google are too convenient to ignore, which creates a feedback loop that is good for individuals but bad for the group.
I agree that download over the Internet is better, as in more libre, but then developers and users lose the convenience of a simple way to pay, and of a CDN and update system that developers don't need to deal with. The users buy less, and it makes less sense for developers to distribute software that way.
What makes you think that "no one had objections about Apple and Amazon doing so"? Lots of people did. Lots of people have been quite vocal in their dislike of the so-called "walled garden" practices of various organizations.
Google and Android have generally provided a relatively open third-party alternative to what Apple and others are offering. If the degree of openness is changing, however, then I can see people speaking out against them, too, and possibly looking for alternatives.