Neither the Ars Technica piece nor the original article really talks about the staining process: reagents, often made from oak galls, were brushed onto the parchment. I remember when I first saw the same kind of brown stains on a medieval manuscript and had no clue what they were: since the ink was so sloppily applied, I first assumed some kind of heat or fire damage (too bad the book I was looking at wasn't written by anyone famous, so it'll probably never be imaged).
The idea of permanently causing damage to a manuscript in the hopes that you can temporarily make out the text seems kind of crazy (and certainly is not a proper archival practice today!), but it happened with some frequency in the mid-nineteenth century. It seems that it actually did make certain inks more visible for a brief time, though for others it didn't work. And of course this was all at the cost of permanently marring the parchment.
Not only manuscripts. Arthur Evans uncovered the palace-city of Knossos, perhaps the oldest settlement in Europe, excavated it, then had a go at repairing it with reinforced concrete:
For some reason most online publications still act as if we're in the era of paper journals and stubbornly refuse to furnish their articles with references, as if they had strict space constraints.
No, they think they’re in the 21st century and think it brings in more money when site visitors click on links that stay on their site than when they click on outgoing links.
> it brings in more money when site visitors click on links that stay on their site than when they click on outgoing links
As trust in journalism declines, I wonder if/when we will start to see a reversal of this?
Providing sources, and bragging about providing sources when other news orgs do not, seems like it would be a great selling point to a lot of people. At the same time, I would also bet that the vast majority of people would never check/click the sources if provided, especially if structured like a bibliography and pushed off to the side. Just the comfort of believing the sources are available would be enough to keep people coming back, meaning journalists would get to have their cake and eat it too.
I think you could technically refer to them as journalists too, but I meant the traditional non-independent kind. Honestly, the rise of bloggers and youtubers providing higher-quality, well-sourced content than the mainstream is partially why I think this might become a de facto requirement to compete in the future.
Because those alternatives exist, people who care about that kind of sourcing will select themselves out of the audience of traditional journalism. That will leave that audience with less demand for such rigor.
(At least that's an alternative story you can spin. So we can't say a priori whether your narrative or mine would prevail, at least not without leaving our armchairs.)
A new ranking boost for high reputation outbound links similar to the current boost for high reputation inbound links would encourage citations because editors would encourage the practice from journalists.
Some check might be necessary to make sure the citation fits the context of the article, but current NLP tech can manage that fairly well - and again, I vaguely remember that Google has some kind of context check for outbound links too.
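For illustration only, here is a toy sketch of what such a context check might look like at its crudest: score how well a cited page fits the citing paragraph with a bag-of-words cosine similarity. This is pure stdlib, nothing like what Google actually runs (they presumably use far heavier models); all the example texts and the function name `citation_fit` are made up.

```python
# Toy context check: cosine similarity between bag-of-words vectors of the
# citing paragraph and the cited text. A crude stand-in for real NLP models.
import math
import re
from collections import Counter

def bow(text: str) -> Counter:
    # Lowercase word counts; a real system would drop stop words, stem, etc.
    return Counter(re.findall(r"[a-z]+", text.lower()))

def citation_fit(paragraph: str, cited_text: str) -> float:
    a, b = bow(paragraph), bow(cited_text)
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

para = "Ptolemy fit a geometric model to the best astronomical observations"
good = "The Almagest collects astronomical observations into a geocentric model"
bad = "A recipe for sourdough bread with long cold fermentation"
print(citation_fit(para, good) > citation_fit(para, bad))  # True
```

A real pipeline would compare embeddings rather than raw word counts, but the shape of the check (does the cited document score above some relevance threshold for the surrounding text?) is the same.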
I’m doing a course on Alexander the Great at the moment, and for a couple of seconds I thought we’d uncovered surviving versions of Ptolemy I Soter’s account (which Arrian used as a primary source). Incidentally, that Ptolemy was a patron of Euclid, whose work on optics the later Ptolemy built upon. Small world, but with so many things lost to time.
This is about his astronomical Meteoroscope instrument, which is not that interesting.
Most of his astronomical texts are horrible and set back the advance of science in a way comparable only to Aristotle's completely wrong mechanics. His geographical texts likewise (mostly stolen and wrong); only his mathematical texts are worthwhile.
It’s still interesting to study Ptolemy because of the role he played in the history of science and the scientific method. Rigorous scientific practice isn’t something we walked out of the jungle having mastered, it took centuries to develop after many missteps such as Ptolemy’s.
Furthermore, one of the lessons we learned from the bad assumptions of Ptolemy (geocentrism) is to never assume that we occupy a special position from which we make observations. This holds value even in modern cosmology, since it appears that we’re at the “centre” of the universe due to the cosmic microwave background’s extreme uniformity in every direction. Cosmologists thus assume that we’re not special, so we are unlikely to be at the centre of anything, which leads to the term “observable universe” and the potential for the universe to be much larger than this region from which all light has reached us.
This is getting too far into the weeds on judging Ptolemy's intentions and motivations. It also applies modern standards of scientific rigour to work that was done before they existed, a common anachronism (and fallacy) when judging figures throughout history.
the heliocentric view was known long before Ptolemy
I am aware that Aristarchus of Samos put forth a heliocentric view. Unfortunately for him, it wouldn't be until the development of Kepler's Laws (ca. 1609-1619) that a heliocentric model could be made to yield better predictions (in all cases) than the Ptolemaic model.
Yes, even though Copernicus developed the more modern heliocentric theory (before 1543), he could not reduce the errors in his predictions because he held onto the assumption of perfect circular orbits dating back to Plato. Few astronomers at the time accepted his theory, and even Tycho Brahe constructed his own model, which was mathematically equivalent to Copernicus's but placed the Earth at the centre of the system.
> Furthermore, one of the lessons we learned from the bad assumptions of Ptolemy (geocentrism) is to never assume that we occupy a special position from which we make observations.
The geocentric model does not assume “we” are in a special position. The center of the universe would be the center of the earth, not the point of the observer or any human.
(But interestingly, in Dante's Comedy, the devil is placed at the center of the earth and hence the center of the universe.)
In modern astronomy parlance, "we" refers to humans, i.e. earthlings, hence the earth. Placing the earth at the centre of anything in your theory is applying a human-centric bias to your ideas. Astronomers and cosmologists try to avoid that nowadays.
When talking about deep space objects (such as distant galaxies), the centre of the earth vs. the surface is far below the noise floor in terms of distance.
His point was that being at the spatial center does not automatically imply the sort of bias you and others seem to impute. Indeed, it would have been considered idolatrous and prideful to put Man at the center (who should prefer to reign in Hell rather than serve in Heaven?). God is at the center, in the ancient world but especially during the Medieval period. Someone once remarked that the spatial centrality of the earth was commonly understood as a kind of centrality within the lowest order of reality (Man himself is, in Catholic tradition, the lowest of the intellectual beings). This actually sheds some light on why the Incarnation is such a big deal. Compare this with the Jewish and Muslim rejection of the notion that God would lower Himself by entering and uniting with Creation in such an intimate way.
If anything, it is the Enlightenment that dethrones God and installs Man at the center of concern. Modernism is in this sense a kind of worship of Man, and what is worship if not putting something at the center?
I think people attach too much significance to the historical belief in geocentrism. It is a very reasonable default position to take given what is observable. Much of the mythology surrounding heliocentrism and how supposedly theologically disastrous it was is fabrication and Enlightenment propaganda.
> The geocentric model does not assume “we” are in a special position. The center of the universe would be the center of the earth, not the point of the observer or any human.
I don't think anybody was laboring under this misapprehension; no need to "clarify" for the OP just to introduce your Dante anecdote.
All new ancient sources are "interesting" in some way or other, given the paucity of texts that have survived to the present day. A newly deciphered text about an astronomical instrument will probably result in new information about Graeco-Roman scientific instruments more generally.
Can I offer my opposite opinion here? I think Ptolemy is one of the most important figures in the history of science. Not necessarily at the level of Newton or Kepler, but certainly on par with Tycho Brahe.
Today we know of Ptolemy because of the whole thing with Galileo and the Inquisition and his famous "E pur si muove". And also because of Copernicus. And Giordano Bruno burning on the stake. And since a good superhero story needs a villain, Ptolemy is cast in that role.
First, to disabuse anyone of any idea that Galileo was Batman and Ptolemy the Joker, I will link here the singular and incomparable blog post "The great Ptolemaic smackdown" [1]. I know, I know, it's late where you are, and you have to go to work tomorrow. But this is something that shows up on the internet about once a decade. Go and read it, and call in sick tomorrow. Or just bookmark it, and spend an afternoon over the weekend with a fine glass of wine and a fine piece of science writing.
Back to my argument, which is somewhat orthogonal to the linked blog post.
Let's go back to the time of Ptolemy, which is also the time of Trajan, Hadrian, and Marcus Aurelius. The Roman empire is at the pinnacle of its power. We are in the middle of a calm and prosperous period now known as the Pax Romana. Culture is flourishing. It is the apogee of the Roman architectural revolution [1].
Yet science is nothing like the science of the 19th to 21st century. Today the bedrock of science is making falsifiable hypotheses and testing them. We also like when these hypotheses are based on simple laws. Such as Newton's laws.
But back then people just did not have the same concepts. The Greek culture (which continued under the Roman empire) had its shining pearl in Euclid. The neat edifice of axioms, proofs, and theorems created by Euclid was a stunning breakthrough. Most likely it had its origins in the axiomatic-like style of Plato's philosophy before him.
But the concept of systematic observation was missing. That was Ptolemy's phenomenal contribution. He collected the best astronomical observations until his time, and fit a certain mathematical theory to these observations. The theory (as we now know) was flawed. But the method was a stupendous advance.
Through no fault of Ptolemy, the Roman empire entered a period of decline. For a thousand years no person of Ptolemy's caliber put in the effort to gather more precise astronomical observations and fit a better theory to them. Instead, people copied his tabulated observations over and over, introducing errors in the process. The first astronomical tables to surpass Ptolemy's were the Alfonsine tables, created about 1100 years after him.
After that, we finally enter the modern era with the tables produced by Tycho Brahe and Johannes Kepler and published as the Rudolphine tables in 1627.
Think about this for a moment: the first improvement over Ptolemy's tables took about one millennium, and the second another half a millennium. After which science entered an exponential explosion that continues to this day.
What happened then, was perfectly natural. If someone sets a record that stands for a thousand years, humans, being what they are, will naturally lionize that person. Eventually his every recorded sentence will acquire the status of an absolute theorem. People will not dare to contradict him. And this can stop science in its tracks.
But is that any of Ptolemy's fault?
Compare Ptolemy with Tycho Brahe, who was followed in quick succession by Kepler and Newton. What if Ptolemy had a younger peer of his caliber, like Brahe fortuitously had Kepler? There is a chance that the world could have had a scientific revolution one thousand years before it actually had it.
> The idea of permanently causing damage to a manuscript in the hopes that you can temporarily make out the text seems kind of crazy (and certainly is not a proper archival practice today!), but it happened with some frequency in the mid-nineteenth century.
Here's a good article on that subject that helped me understand what was going on, which even mentions the same Cardinal Mai who applied reagents to the Ptolemy ms: http://palimpsest.stmarytx.edu/AmbrosianaArchive/ResourceDoc...