This is how the simplest variant of SGML (or XML) entities has worked since 1986:
<!doctype html [
<!entity e system "e.html">
]>
<html>
<title>Hello, world!</title>
<p>The content of e is: &e;</p>
</html>
HTML was envisioned as an SGML vocabulary from day one. That SGML's document-composition and other facilities were used only at authoring time and not supported directly by browsers was merely due to the very early stage of the first browser software ([1]), which even mentions SGML directly - just as the HTML specs have presented HTML as a language for authoring, not just delivery, since at least version 4.
There really never was a need or call for browser devs to come up with idiosyncratic and overcomplicated JavaScript-based solutions for such a simple and uncontroversial facility as a text macro, which has been part of every document/markup language in existence since the 1960s.
Check your drive anyway. Purchased after that date does not necessarily mean manufactured after that date :)
It sucks that firmware updates used to be something to look forward to but are now something to be avoided at all costs. I'd rather buy a second drive if I needed some new feature.
MakeMKV will show you all the relevant drive info when you start it up, including LibreDrive status. Here's my BDR-XS07 for example: https://i.imgur.com/10CGsbm.png
With a combination of MakeMKV, DVDfab Passkey, and a LibreDrive-supporting drive I can rip pretty much anything. Passkey is a driver-level thing like AnyDVD HD. Both of them are available perpetually-licensed but AnyDVD is currently being legaled and is unavailable: https://www.dvdfab.cn/passkey.htm
You can try MakeMKV for free using the beta key posted monthly on their subreddit, but I just went ahead and bought it because it's not that expensive and then I don't have to think about it: https://old.reddit.com/r/makemkv/comments/1jolbsq/the_may_ke...
I'm currently going through and backing up my library with Passkey's “Rip to Image”. Due to the way LibreDrive works, it's common for MakeMKV to be able to make MKVs (lol) directly from a BD/UHD disc in the drive but fail to open a protected ISO of the same title. For this reason I uncheck “Keep Protection” in Passkey for anything AACS (BD, UHD, HD-DVD (yes I have an HD-DVD drive)) so I can run the image through MakeMKV later. I do check “Keep Protection” for DVDs however, because CSS is fully broken and I want to do the most untouched rip possible.
I recommend reading the full paper linked by esafak. For those without the time, the brief summary is that political stress is a multiplication of (1) the likelihood of the general populace to mobilize, (2) the likelihood of elites to mobilize, and (3) financial distress at the national level. The primary drivers of each of these are listed below (a rough sketch of the multiplicative structure follows the list):
(1) real income (its inverse, actually), % of the population that is urban, and % of people in their 20s. i.e. If real income declines, urban population % goes up, or % of the population that is young increases, the mobilization factor goes up.
(2) real income of elites (inverse, again), and elite competition for government offices. i.e., If incomes of elites go down or competition among elites for government offices goes up, the mobilization factor goes up.
(3) debt to GDP ratio, and distrust in the state. i.e., If debt to GDP goes up or people trust the state less, the financial distress factor goes up.
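Since the index is just a product of three composite factors, a tiny code sketch may make the structure concrete. To be clear, the names and functional forms below are my own shorthand for what the list above describes, not the paper's actual model.

  // Rough sketch only: variable names and functional forms are illustrative.
  interface StressInputs {
    realIncomeIndex: number;   // mass mobilization rises as this falls
    urbanShare: number;        // fraction of the population that is urban
    youthShare: number;        // fraction of the population in their 20s
    eliteIncomeIndex: number;  // elite mobilization rises as this falls
    eliteCompetition: number;  // contenders per government office
    debtToGdp: number;         // national debt as a share of GDP
    distrustInState: number;   // 0 (full trust) .. 1 (no trust)
  }

  function politicalStress(x: StressInputs): number {
    const massMobilization = (1 / x.realIncomeIndex) * x.urbanShare * x.youthShare;
    const eliteMobilization = (1 / x.eliteIncomeIndex) * x.eliteCompetition;
    const fiscalDistress = x.debtToGdp * x.distrustInState;
    // The factors multiply, so a spike in any one of them amplifies the
    // other two rather than merely adding to the total.
    return massMobilization * eliteMobilization * fiscalDistress;
  }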
The author provides a worrying chart showing an increasingly steep spike in the overall political stress level of the US, but it stops at 2013 (when the paper was published). I would argue that the financial distress factor has gotten substantially worse in the intervening 12 years, but the other two factors may have declined due to the resumption of real income increases starting in 2015.
The causal factors of revolution and civil war are straightforward to propose. The interesting part is the quantitative analysis; the validation of the causal model.
On a fun note, I was reading the Wikipedia article on cliodynamics (the discipline whose name the author coined) and saw that the article drew an apt comparison between cliodynamics and Asimov's psychohistory.
Personally I don't like Mikrotik very much. It's just too easy to turn on some feature that disables offloading. I run Ruckus/Brocade ICX6610 and ICX7150 in my lab currently, though the 6610 uses more power than makes sense.
It's a part of the browser. It's not doing it with JavaScript, if that's what you're asking. Chrome includes a file with a name like widevinecdm.dll on Windows. No one knows exactly what this file does because it is incredibly obfuscated (https://github.com/tomer8007/widevine-l3-decryptor/wiki/Reve...), but presumably that file implements this functionality.
As for what Widevine actually does, it just uses a protobuf-based protocol to request a decryption key from a license server. License request messages from the client have to be signed with a valid device private key, which is made difficult to extract, though keys do occasionally leak.
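Purely as an illustration of that flow (every name below is invented; the real messages are protobufs, and the serialization and signing happen inside the CDM, not in page script):

  // Hypothetical sketch of the license flow described above.
  interface LicenseRequest {
    keyIds: Uint8Array[];   // which content keys the player needs
    clientId: Uint8Array;   // device certificate chain / identity
  }

  // Stub: in reality the CDM serializes the request and signs it with the
  // device private key internally.
  declare function buildSignedRequest(req: LicenseRequest): Uint8Array;

  // The license server only returns content keys if the signature verifies
  // against a device key it trusts (hence the value of leaked device keys).
  async function fetchLicense(serverUrl: string, req: LicenseRequest): Promise<Uint8Array> {
    const resp = await fetch(serverUrl, { method: 'POST', body: buildSignedRequest(req) });
    return new Uint8Array(await resp.arrayBuffer());
  }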
In a similar vein, jart's Cosmopolitan libc has a really fun collection of tables that compare various constants across platforms, e.g. syscalls, syscall flags, error numbers, etc. It includes (variants of) Linux, XNU, NT, and the BSDs.
While it is an interesting quirk of history that we mainly think about computability hand-in-hand with the "halting" problem instead of Turing's symbol-printing, there are so many more interesting nuggets in the 1936 paper (like computational universality, the first ever programming bugs, etc). I do think the paper linked gets the nuances of attribution here correct.
I wrote up a little guide to Turing's paper a while back [0] if anyone is interested in reading it but needs help like I did.
On the second point, no. I think we have a preponderance of evidence for Len Sassaman. The Sassaman theory also explains why those bitcoins haven't moved, as well as many other peculiar things. This appears to be the most authoritative research that lays everything out: https://arxiv.org/pdf/2206.10257v14.pdf
If anyone actually reads that and comes to a different conclusion, I'd love to hear it and why.
If you want to go down the nowsecure.nl rabbithole (often used as a benchmark for passing bot detection) [1] is a good resource. It includes a few fixes that undetected-chromedriver doesn’t.
My searches for any existing publication of those code snippets didn't turn anything up, so I waited for the 5GB download of the docker .tar and pulled out the files that ended in .ipg. I'm very cognizant that's not the whole story, but that's what I had the energy to do for now. I really wanted to see the PDF one, since that's actually the heuristic I use for evaluating any such "I can describe binary files" framework, because that file format is ... special
Because Gist wouldn't let me use "/" in the filenames, I just replaced them with _ after killing the home/opam part; I left the rest so hopefully they'll show up in search results, since pldi-ae and IPG are pretty distinct
The author makes an argument against designing new XML languages. I think his arguments are weak. This does not mean I think we should design more XML languages, just that the arguments this particular author brings against it are weak. That having been said, the mid section with the tooling suggestions by use case is neat.
One thing he condemns such endeavors for is that they are unpleasant and somehow "political". I can see what he means, but this has nothing to do with "overdoing the extensibility" of XML. As Aaron Swartz put it:
"Instead of the "let's just build something that works" attitude that made the Web (and the Internet) such a roaring success, they brought the formalizing mindset of mathematicians and the institutional structures of academics and defense contractors. They formed committees to form working groups to write drafts of ontologies that carefully listed (in 100-page Word documents) all possible things in the universe and the various properties they could have, and they spent ours in Talmudic debates over whether a washing machine was a kitchen appliance or a household cleaning device. [https://www.cs.rpi.edu/~hendler/ProgrammableWebSwartz2009.ht...]"
It is true that similar endeavors are prone to looking for an Absolute Cosmic Eternal Perfect Ontological Structure (credit: Lion Kimbro). If you drop that idea in any office, you will get as many proposals for entities as there are anuses, as if anyone is entitled to an ontology.
Don't get me wrong, anyone might be entitled to submit an entity or criticize a hierarchy, but I think this is meaningful mostly in the context of targeted audience research and agile development practices. All in all, I think that the problem here is not with the 'X' in XML, but with poor organization-level practices.
Furthermore, I did follow the link and surveyed the XML languages. I did not see the apparently self-evident truth the writer sees in there. Sure, there are many of them, but how is this even an argument? Some of the listed languages seem quite cool to me, especially the science ones. And the next person might dig the legal ones. The argument that "there are so many of these languages, they can't all be important" (or "real") does not sit well with me. There are tons of different programming languages, web frameworks, Linux distributions, not to mention the incomprehensible multitude in other domains, such as car makes and models or, well, birds.
It is just simplistic to disparage any number of things because there are too many of them to readily make sense of, and that is a cognitive stance I can't endorse. Look at Medical Subject Headings, or the Dewey Decimal or Library of Congress cataloging systems. There is just a ton of things out there, and for each one of them there is a person with more expertise on it than you have. These taxonomies might be important to them; what are you gonna do? Stop them?
A bird's-eye-view exasperation at the sheer number of things is the hallmark of a small-town mentality that is untenable for the hacktivist mindset. The response here is, I guess, reusability of existing standards, and agile practices that involve the user in the development process. But the author did not bring up any of these.
Several things helped to damage XML as the format of choice.
1. Mismanagement by the W3C of associated standards:
The awful, bloated XML Schema spec can't even validate many common XML design patterns. As Schematron showed, it would obviously have been better to leverage XPath for the validation format they pushed on everyone. Instead, it made people think: wow, XML is this big, complicated, bloated beast, we need something else.
The awful, bloated SOAP spec, of which Don Box once said that if only XML Schema had existed, we wouldn't have had to make SOAP - let me tell you, that was the best darn laugh I had that year! Then the whole REST movement arrived - very loosely based on the PhD dissertation of a W3C member - while the W3C stayed committed to SOAP, which everyone resented; that resentment rubbed off on the W3C and on XML in turn.
Tying the second versions of successful standards - XSLT, XPath - to the questionable XML Schema spec made members of those communities feel that maybe they weren't enjoying the new versions of the tech so much; there was some drop-off, and complaints.
The creation of XHTML as an XML dialect did not suit very many people.
2. The continued rise of the Web as the platform of choice.
The successful XML technologies were not well suited to making web sites that were not document based. If your site was, say, a thin navigation structure for getting around a bunch of documents, then a top-level programming language to handle serving documents, XSLT to transform them to XHTML, and a thin layer of JavaScript on top was quite a decent solution.
But XSLT is not really suitable to making all sorts of sites with lots of different data sources being pulled in to build a frontend. So when you have a lot of languages in use what do you do? You drop the language that is least suited to most of your tasks and use some of the other languages to take care of the dropped functionality, thus easing cognitive load.
I'm serious about this: I was very good with XSLT and associated technologies and built many high-quality document-based websites for large organizations, but the XML stack of technologies is sub-par for building most modern websites, which often contain multiple app-like functionalities on every page.
I suppose the programmers and technologists at the W3C did not realize this because they did not build websites; they were doing more enterprise applications and data pipelines, and many came from the publishing world.
3. JSON was being pushed by Douglas Crockford. As the E programming language was winding down (https://www.crockford.com/ec/etut.html), he started to focus more of his time on JavaScript and on arguing for JSON, which he essentially identified as already existing as a potential data interchange format. As REST started to take mindshare away from SOAP, and JSON got pushed by someone who did understand web programming, the increasingly web-focused software development world moved away from SOAP and XML (which were seen as essentially the same thing) to REST and JSON - or really, REST-like and JSON-like - because these were seen as simpler and quicker to iterate with, which is essentially correct.
On the Web, especially the web frontend, simple wins, because frontend development is in many ways more complicated than other forms of development. Why so? Because a frontend developer potentially has to handle many more types of complexity than are generally handled in other programming disciplines. This often leads frontend developers to cut corners that other disciplines wouldn't, so as to cut cognitive load - but here I'm definitely getting off the subject. At any rate, for the expanding web market, XML and its related technologies were a bundle of complexity that could be replaced with a simpler stack; even if that meant some of the things the old stack was good at became slightly harder, it seemed - and probably nearly always was - still a significant win.
I love listening to young developers guess at the history of XML, and why it was "complex" (it wasn't), and then turn around and reinvent that wheel, with every bit of complexity that they just said they didn't like... because it's necessary.
So a bit of history from someone who was already developing for over a decade when XML was the new hotness:
The before times were bad. Really bad. Everybody and everything had their own text-based formats.[1] I don't just mean a few minor variants of INI files. I mean wildly different formats, in different character encodings that were literally never declared. Niceties like UTF-8 weren't even dreamt of yet.
Literally every application interpreted their config files differently, generated output logs differently, and spoke "text" over the network or the pipeline differently.
If you needed to read, write, send, or receive N different text formats, you needed at least N parsers and N serializers.
Those parsers and serializers didn't exist.
They just didn't. The formats were not formally specified, they were just "whatever some program does"... "on some machine". Yup. They output different text encodings on different machines. Or the same machine even! Seriously, if two users had different regional options, they might not be able to share files generated by the same application on the same box.
Basically, you either had a programming "library" available so that you could completely sidestep the issue and avoid the text, or you'd have to write your own parser, personally, by hand. I loooved the early versions of ANTLR because they made this at least tolerable. Either way, good luck handling all the corner-cases of escaping control characters inside a quoted string that also supports macro escapes, embedded sub-expressions, or whatever. Fun times.
Then XML came along.
It precisely specified the syntax, and there were off-the-shelf parsers and generators for it in multiple programming languages! You could generate an XML file on one platform and read it in a different language on another by including a standardised library that you could just download instead of typing in a parser by hand like an animal. It even specified the text encoding so you wouldn't have to guess.
It was glorious.
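For contrast with the hand-rolled-parser days, this is roughly what "just use the standard parser" looks like in a browser today; other languages pull in an equivalent library, the point being that nobody writes the parsing themselves:

  // Parse a small XML document with the browser's built-in DOMParser.
  const doc = new DOMParser().parseFromString(
    '<part id="A-113"><name>Hydraulic pump</name></part>',
    'application/xml',
  );
  console.log(doc.querySelector('name')?.textContent); // "Hydraulic pump"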
Microsoft especially embraced it and to this day you can see a lot of that history in Visual Studio project files, ASP.NET web config files, and the like.
The reason JSON slowly overtook XML is many-fold, but the key reason is simple: it was easier to parse JSON into JavaScript objects in the browser, and the browser was taking off exponentially as an application development platform. JavaScript programmers outnumbered everyone else combined.
Notably, the early versions of JSON were typically read using just the "eval()" function.[2] It wasn't an encoding per se, but just a subset of JavaScript. Compared to needing an XML parser in JavaScript, it was very lightweight. In fact, zero weight, because if JavaScript was available, then by definition, JSON was available.
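Roughly what that era looked like; the parentheses around the payload were the usual trick to keep the object literal from being parsed as a block statement:

  const payload = '{"user": "ada", "id": 42}';

  // The early idiom: let the JavaScript engine itself "parse" the JSON.
  // Zero extra code, but it will happily execute anything else in the string.
  const viaEval = eval('(' + payload + ')');

  // What replaced it once built-in parsers arrived (see footnote [2]):
  const viaParse = JSON.parse(payload);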
The timeline is important here. An in-browser XML parser was available before JSON was a thing, but only for IE 5 on Windows. JSON was invented in 2001, and XMLHttpRequest became consistently available in other browsers only after 2005 and was only standardized in 2006. Truly universal adoption took a few more years after that.
XML was only "complex" because it's not an object-notation like JSON is. It's a document markup language, much like HTML. Both trace their roots back to SGML, which dates back to 1986. These types of languages were used in places like Boeing for records keeping, such as tracking complex structured and semi-structured information about aircraft parts over decades. That kind of problem has an essential complexity that can't be wished away.
JSON is simpler for data exchange because it maps nicely to how object oriented languages store pure data, but it can't be readily used to represent human-readable documents the way XML can.
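A quick illustration of that asymmetry; the JSON encoding of the mixed content below is ad hoc, invented on the spot, which is exactly the problem:

  // Pure data: JSON is a natural fit.
  const order = { id: 42, items: [{ sku: 'A-113', qty: 2 }] };

  // Mixed content, i.e. prose with inline markup, falls straight out of XML:
  const asXml = '<p>Call <b>now</b> to order part <ref id="A-113"/>.</p>';

  // In JSON you have to invent a node encoding before you can even start:
  const asJson = ['Call ', { b: ['now'] }, ' to order part ', { ref: { id: 'A-113' } }, '.'];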
The other simplification was that JSON did away with schemas and the like, and was commonly used with dynamic languages. Developers got into the habit of reading JSON by shoving it into an object, and then interpreting it directly without any kind of parsing or decoding layer. This works kinda-sorta in languages like Python or JavaScript, but is horrific when used at scale.
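In TypeScript terms, the habit is the first cast below, and the "horrific at scale" part is that nothing ever checks the shape; the hand-rolled guard is the decoding layer that keeps getting re-added (all names here are illustrative):

  type Config = { retries: number; endpoint: string };
  const body = '{"retries": 3, "endpoint": "https://example.com"}';

  // The habit: shove it into an object and trust the annotation.
  const blind = JSON.parse(body) as Config; // lies silently if the payload drifts

  // The layer that gets re-invented once this bites at scale:
  function parseConfig(text: string): Config {
    const raw: unknown = JSON.parse(text);
    if (typeof raw !== 'object' || raw === null) throw new Error('not an object');
    const r = raw as Record<string, unknown>;
    if (typeof r.retries !== 'number' || typeof r.endpoint !== 'string') {
      throw new Error('unexpected config shape');
    }
    return { retries: r.retries, endpoint: r.endpoint };
  }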
I'm a developer used to simply clicking a button in Visual Studio to have it instantly bulk-generate entire API client libraries from a WSDL XML API schema, documentation and all. So when I hear REST people talk about how much simpler JSON is, I have no idea what they're talking about.
So now, slowly, the wheel is being reinvented to avoid the manual labour of REST and return to the machine automation we had with WS-*. There are JSON API schemas (multiple!), written in JSON (of course), so documentation can't be expressed inline (because JSON is not a markup language). I'm seeing declarative languages like workflow engines and API management expressions written in JSON gibberish now, same as we did with XML twenty years ago.
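For the curious, here is a minimal fragment of one of those dialects (JSON Schema, in this case); note how any documentation has to be crammed into plain "description" strings, because JSON has no markup of its own:

  const personSchema = {
    $schema: 'https://json-schema.org/draft/2020-12/schema',
    type: 'object',
    properties: {
      name: { type: 'string', description: 'Full display name.' },
      age: { type: 'integer', minimum: 0, description: 'Age in whole years.' },
    },
    required: ['name'],
  } as const;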
Mark my words, it's just a matter of time until someone invents JSON namespaces...
[1] Most of the older Linux applications still do, which makes it ever so much fun to robustly modify config files programmatically.
[2] Sure, these days JSON is "parsed" even by browsers instead of sent to eval(), for security reasons, but that's not how things started out.
Generally, there is a misunderstanding of these markup languages. JSON came along because the JavaScript people found it convenient and more network-efficient (the irony being that it isn't really). And then, a million years later, schema-validation logic followed to back-fill the deficiencies.
In my opinion, one should go for XML when writing a portable document format of any variety, to allow use of the vast array of schema validators, linters, and so forth. This makes plugins that target a given schema much more portable and easier to write.
It's kind of ridiculous how many times web folks reinvent the wheel. Seriously, get rid of YAML, TOML, JSON. We already had INI, XML/XPATH/XSD.
There's nothing wrong with XML; people are just lazy, and so is JSON. It's the lazy, sloppy cousin of XML, which was a well-thought-out standard that filled HTML's deficiency as a data format.
Honestly though, we should be using something like this:
http://openddl.org/
Your claim is true to a first approximation. But greps are line oriented, and that means there are optimizations that can be done that are hard to do in a general regex library. You can read more about that here: https://blog.burntsushi.net/ripgrep/#anatomy-of-a-grep (greps are more than simple CLI wrappers around a regex engine).
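To make the line-oriented point concrete, here's a toy sketch in the spirit of the linked post - a literal-only search that scans the whole buffer and only looks for line boundaries around actual hits, instead of running a regex on every line. Ripgrep's real implementation (in Rust) is far more involved; this just shows the shape of the trick.

  // Return every line of `haystack` containing the literal `needle`.
  function grepLiteral(haystack: string, needle: string): string[] {
    const out: string[] = [];
    let at = haystack.indexOf(needle);
    while (at !== -1) {
      // Only now do we bother locating the enclosing line.
      const start = haystack.lastIndexOf('\n', at) + 1;
      let end = haystack.indexOf('\n', at);
      if (end === -1) end = haystack.length;
      out.push(haystack.slice(start, end));
      // Skip past this line so each matching line is reported once.
      at = haystack.indexOf(needle, end);
    }
    return out;
  }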
If you read my commentary in the ripgrep discussion above, you'll note that it isn't just about the benchmarks themselves being accurate, but the model they represent. Nevertheless, I linked the hypergrep benchmarks not because of Hyperscan, but because they were done by someone who isn't the author of either ripgrep or ugrep.
Hyperscan also has some peculiarities in how it reports matches. You won't notice it in basic usage, but it will appear when using something like the -o/--only-matching flag. For example, Hyperscan will report matches of a, ab and abc for the regex \w+, whereas a normal grep will just report a match of abc. (And this makes sense given the design and motivation for Hyperscan.) Hypergrep goes to some pains to paper over this, but IIRC the logic is not fully correct. I'm on mobile, otherwise I would link to the reddit thread where I had a convo about this with the hypergrep author.
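A toy way to see the two semantics for that one input (real Hyperscan reports match end offsets through a callback; this just enumerates them naively for a match starting at offset 0):

  const hay = 'abc';

  // Leftmost-longest, what a traditional grep reports:
  const grepStyle = [...hay.matchAll(/\w+/g)].map(m => m[0]); // ["abc"]

  // "Report every place the pattern could end" semantics:
  const allEnds: string[] = [];
  for (let end = 1; end <= hay.length; end++) {
    const prefix = hay.slice(0, end);
    if (/^\w+$/.test(prefix)) allEnds.push(prefix); // ["a", "ab", "abc"]
  }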
And so you can do things like disable the online account requirement and even have it automatically make you a local account with a given username as well as a couple other options.
When I see the term "Open Data" I instantly think of open data portals - mostly run by governments around the world. These things have never been healthier: ten years ago they hardly existed, today you can get civic data from local governments all over the place (last time I saw an attempt to count there were over 4,000 of these portals, and that was a few years ago).
This article is about something different: it's about what I guess you could call the "Open APIs" movement. Back in the days of Web 2.0 every service was launching an open API, hoping to harness developer attention to help make the platforms more sticky. Facebook and Twitter both did incredibly well out of this strategy, at least at first.
THOSE APIs are mostly on the way out now. Companies realized that giving away their data for free has a lot of disadvantages.
Maybe you'll get as many answers as there are hackers here, but from my own perspective it seems it went the same way as all other communities: internal strife, odd personalities, and personal differences slowly tore them apart.
It's not all bad. In my city it seems the hacker spaces organized by individuals were replaced by maker spaces funded and organized by the city. At least two that I know of, that are now basically free co-working spaces with 3D printers, meeting rooms, free wifi and such.
One, called Goto10, with spaces in Malmö and Stockholm, is run by the agency that ran the ccTLD back in the early 2000s. Very nice space to work at.
The other, called Stapeln, is run by the city of Malmö and is in a basement.
Besides those I know of at least two hacker spaces run by individuals that were still going last I checked.
I think at the bare minimum you should strive to be readable, and you can do that with surprisingly few extra lines of CSS and very basic web design principles (avoid using full-black or full-white backgrounds). CSS is, uhh... a choice, but you don't need to know too much in the very beginning to start building up some motivation for making stuff that looks easy on the eyes.
It can be written by hand but that's way too verbose. Pug is a great solution to that problem: it's just HTML but much less verbose. I integrated it with GitHub Pages so pug sources get compiled to HTML and published when commits are pushed. Great experience.
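For anyone curious what that looks like, here's a minimal sketch assuming the standard pug npm package (a GitHub Pages build step or action would run something equivalent, or the pug CLI):

  import * as pug from 'pug';

  // Indentation is the structure; no closing tags to type.
  const source = [
    'html',
    '  head',
    '    title Hello',
    '  body',
    '    p Much less typing than raw HTML.',
  ].join('\n');

  console.log(pug.render(source));
  // -> <html><head><title>Hello</title></head><body><p>Much less typing than raw HTML.</p></body></html>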
On macOS: Install from emacs-plus in Homebrew
On Linux: Install from your distro’s pkg manager.