There are countless repeatable psi experiments that show unusual deviations from chance, but very few that have been conducted by institutions with a large number of viewers. My favorite is the Ganzfeld experiment:
https://en.wikipedia.org/wiki/Ganzfeld_experiment
Unfortunately the more it's replicated, the smaller the deviation seems to become. But if there is a deviation above random, say 1%, then we could use a large number of viewers and an error correction coding scheme to transmit a binary message by the Shannon-Hartley theorem:
https://en.wikipedia.org/wiki/Shannon–Hartley_theorem#Power-...
https://en.wikipedia.org/wiki/Error_correction_code
At 1 impression per person per second, it might be on the order of 1.44*(1/100) bits per guess, or roughly 1 bit of data per minute per viewer. I'm sure my math is wrong. But a few dozen people might be able to achieve primitive Morse code-style communication across the globe or even space.
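Here's a rough sanity check of that estimate as a Python sketch. The 1% edge and one guess per second are the assumptions from above, and the two models bracket how much the answer depends on how you formalize "1% above chance":

    import math

    # Back-of-the-envelope only; the 1% edge and 1 guess/second are assumptions.
    snr = 0.01            # hypothetical deviation above chance
    guesses_per_sec = 1

    # Shannon-Hartley low-SNR approximation: C ~= B*log2(1 + SNR) ~= 1.44*B*SNR
    c_sh = guesses_per_sec * math.log2(1 + snr)     # ~0.0144 bits/s
    print(c_sh * 60)                                # ~0.86 bits/min per viewer

    # Modeling it instead as a binary symmetric channel that is right 51% of
    # the time gives a far smaller capacity, so the model matters a lot.
    p = 0.51
    h = -p * math.log2(p) - (1 - p) * math.log2(1 - p)
    print((1 - h) * 60 * guesses_per_sec)           # ~0.017 bits/min per viewer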
It would be interesting to see if/how results differ when participants are shown the answers after the experiment, like with your comment about time travel.
Governments probably worked all of this out decades ago if there's anything to it. But it might mean that aliens have faster than light communication. We can imagine petri dish brains or neural nets trained for remote viewing. Sort of an FTL dialup modem.
As long as we're going off the deep end, I think this works through the magic of conscious awareness, which science may never be able to explain. Loosely, it means that God/the universe/everything fractured itself into infinite perspectives to have all subjective experiences and not have to be alone with itself anymore. So rather than being a brain in a box/singularity, source consciousness created all of this when something came from nothing. Consciousness is probably multidimensional, above 4D and 5D, able within the bounds of physics to select where it exists along the multiverse, like hopping between the meshing of gears that form reality. Or Neo in The Matrix. So thought may make life-energy ripples, like gravity waves, on the astral plane where time and distance don't matter. And feelings may be able to affect the probability of quantum wave collapse.
https://hackaday.com/2021/03/04/can-plants-bend-light-to-the...
This has all sorts of ramifications. Time seems to have an arrow even though quantum mechanics is mostly symmetric in time. If we assume that free will doesn't exist, then people would make the same choices if we got in a time machine and watched them choose repeatedly. But if we assume that free will exists, then people would seem to choose randomly with a probability distribution, which would make time travel impossible, since no sequence of events could be replayed with 100% accuracy, similar to how the three-body problem can't be predicted beyond a certain timeframe. So we could have time travel or free will, but not both. The latter case seems to more closely match how the universe works, given observations like the double-slit experiment and our subjective experience of having free will that so-called experts tell us is only an illusion.
It could also mean that synchronicity and manifestation are more apparent to someone having the experience than to the rest of us in the co-created reality. So the subject and conductor of an experiment might witness different outcomes from their vantage points in the multiverse, with echoes of themselves in the other realities, even though the total probability adds up to one. Like how you are still you now and one second before now or after now. It's unclear if subjective mental efforts can hold sway over the shared reality. That gets into metaphysics and concepts like as above, so below.
Wall of text above, so my response is a wall of text. TL;DR: lots of acceptance issues that bias the field toward a lack of exploration/acceptance.
Read through the Ganzfeld experiments, and many of the same issues with the field jump out fairly readily.
1) The opinion from society at large is generally negative and dismissive. Therefore, much of the work is done to discredit, rather than to positively try to replicate or support.
2) There's a chilling effect on those who might actually possess any such ability. Per above, the societal response is mostly negative. Much risk, little reward, and generally a promise of being a social outcast, pariah, or weirdo. Possibly an experimental guinea pig forever with needles in your skull as the only reward.
3) Social antagonism, since almost nobody likes the idea that somebody else is wandering around thought-scraping them, like LLMs pulling your website design. Historically, it's mostly shown negatively (possibly for good reason) in literature, TV, movies and video games. Governments don't like you, even your own. Corporations don't like you. Most if not all civilians don't like you. If the portrayal is positive (there have been some lately), you're usually that wacky eccentric who solves cold cases or talks to ghosts.
4) Jealousy / envy / greed. One of the most reliable responses of having almost anything unique in human civilization is desire for others to take what you have.
5) For those that can demonstrate such abilities to themselves, there's an extreme benefit to not reporting and not socially revealing. Chief example, like always: money. If someone can thought-scan your plans for corporate or stock choices, then why ever report? Better to read the thoughts of Amazon, Microsoft, Google, Nvidia, etc. executives, buy or sell before anybody else has the information without any risk of insider trading accusations, get rich and powerful, and never, ever tell anybody anything.
6) There's a liar's dividend issue. Any group that might possess such abilities (espionage obviously a strong candidate) gains far more by spreading false debate, causing the argument to be about lies or red herrings, and maintaining their secret edge.
7) There's a weaponization issue. What did the government immediately do? Try to weaponize. If you're opposed to being used as a government weapon, there's not much motivation.
Has many of the same issues that animal cognition, animal consciousness, and animal language had for years. An implied threat to the researchers that they may not be the most superior, or that humanity may not be all that special. Up until the early 2000s, most animal consciousness or intelligence work that proposed anything other than severely sub-human capability was heavily dismissed.
Personal favorite was Alex the parrot [1], who asked questions I'm not sure most humans would ask about objects and the world. Yet the general academic response was... mostly negative. The general subject has gotten much more attention lately though, so maybe some of the ESP/PSI ideas eventually will too. Mishka Wants Waffles!
The article mentions about halfway down the page that what made the 80s road-rendering technique possible was racing the beam: an Atari 2600, say, would toggle the color at certain pixel counts as the TV's electron beam swept the screen, producing graphics that seemed otherwise impossible on such underpowered hardware.
Some engines allowed for, say, 8 hardcoded sprites this way by toggling colors at each sprite's position, with various rules about overlapping, so sprites would sometimes flicker when they were next to each other.
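As a purely illustrative toy (real 2600 code is cycle-counted 6502 assembly against the TIA, not Python), the trick boils down to changing one color register at chosen horizontal positions while the beam sweeps each scanline:

    # Toy model of racing the beam: the "frame" is produced by flipping a single
    # background-color register at chosen beam positions on each scanline.
    # Real hardware does this with cycle-counted register writes, not a list.
    SCANLINES, PIXELS = 192, 160

    def render(road_edges):
        """road_edges[y] = (left, right) pixel where the color toggles on line y."""
        frame = []
        for y in range(SCANLINES):
            left, right = road_edges[y]
            color = "grass"
            line = []
            for x in range(PIXELS):          # the beam sweeping left to right
                if x == left:
                    color = "road"           # register write timed to this position
                if x == right:
                    color = "grass"          # flipped back at the far edge
                line.append(color)
            frame.append(line)
        return frame

    # A road that is widest at the bottom and narrows toward the horizon,
    # using only one pair of color toggles per scanline.
    edges = [(80 - y // 4, 80 + y // 4) for y in range(SCANLINES)]
    frame = render(edges)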
I'm late to the party due to work and holiday gatherings, but just wanted to say that this is the first glimmer of hope on the horizon in 25 years.
The slowing down of Moore's law was well understood by the time I was graduating from college with my Electrical and Computer Engineering (ECE) degree in 1999. The DEC Alpha was 700 MHz and there were GHz chips in labs, but most of us assumed that getting past 3-4 GHz was going to be difficult or impossible due to the switching limit of silicon with respect to pipeline stages. Also, on-chip interconnect had grown from a square profile to a ribbon shape that was taller than wide, which requires waveguide analysis. And features weren't shrinking anymore; manufacturers were just adding more layers, which isn't N^2 scaling. The end came just after 2007, when smartphones arrived and the world chose affordability and efficiency over performance. Everyone hopped on the GPU bandwagon and desktop computing was largely abandoned for the next 20 years.
But I've been hearing more about High-Performance Computing (HPC) recently. It's multicore computing with better languages, which is the missing link between CPU and GPU. What happened was, back in the 90s, game companies pushed for OpenGL on GPUs without doing the real work of solving auto-parallelization of code (with no pragmas/intrinsics/compiler hints etc) for multicore computers first. We should have had parallel multicore, then implemented matrix and graphics libraries over that, with OpenGL as a thin wrapper over that DSP framework. But we never got it, so everyone has been manually transpiling their general solutions to domain-specific languages (DSLs) like GLSL, HLSL and CUDA, at exorbitant effort and cost with arbitrary limitations. Established players like Nvidia don't care about this, because the status quo presents barriers to entry to competitors who might disrupt their business model.
Anyway, since HPC is a general computing approach, it can be the foundation for proprietary and esoteric libraries like Vulkan, Metal, etc. That will democratize programming and let hobbyists more easily dabble in the dozen or so alternatives to neural nets. Especially Genetic Algorithms (GAs), which can automatically derive the complex hand-rolled implementations like Large Language Models (LLMs) and Stable Diffusion. We'll also be able to try multiple approaches in simulation, and it will be easier to run a high number of agents simultaneously so we can study their interactions and learning models in a transparent and repeatable fashion, akin to using reproducible builds and declarative programming.
That's how far John Koza and others got with GAs in the mid-2000s before they had to abandon them due to cost. Now, this AmpereOne costs about an order of magnitude more than would be required for a hobbyist 256-core computer. But with inflation, $5-10,000 isn't too far off from the price of desktop computers in the 80s. So I expect that prices will come down over 3-5 years as more cores and memory are moved on-chip. Note that we never really got a System on a Chip (SoC) either, so that's some more real work that remains. We also never got reconfigurable hardware with on-chip FPGA units that could solve problems with a divide-and-conquer strategy, a bit like Just-in-Time (JIT) compilers.
I had detailed plans in my head for all of this by around 2010. Especially on the software side, with ideas for better languages that combine the power of Octave/MATLAB with the approachability of Go/Erlang. The hardware's pretty easy other than needing a fab, which presents a $1-10 billion barrier. That's why it's so important that AmpereOne exists as a model to copy.
The real lesson for me is that our imaginations can manifest anything, but the bigger the dream, the longer it takes for the universe to deliver it. On another timeline, I might have managed it in 10 years if I had won the internet lottery, but instead I had to wait 25 as I ran the rat race to make rent. Now I'm too old and busted, so it's probably too late for me. But if HPC takes off, I might find renewed passion for tech, since this is the way we deliver Universal Basic Income (UBI) and moneyless resources through 3D printing etc, outside of subscription-based AI schemes.
1. What languages do you think are good ones for HPC?
2. Re GAs, Koza, and cost:
In Koza's book on GAs, in one of the editions, he mentions that they scaled the performance of their research system by five orders of magnitude in a decade. What he didn't mention was that they did it by going from one Lisp Machine to 1000 PCs. They only got two orders of magnitude from per-unit performance; the rest came from growing the number of units.
Of course they couldn't keep scaling that way for cost reasons. Cost, and space, and power, and several other limiting factors. They weren't going to have a million machines a decade later, or a billion machines a decade after that. To the degree that GAs (or at least their approach to them) required that in order to keep working, to that degree their approach was not workable.
Ya that's a really good point about the linear scaling limits of genetic algorithms, so let me provide some context:
Where I disagree with the chip industry is that I grew up on a Mac Plus with a 1979 Motorola 68000 processor with 68,000 transistors (not a typo) that ran at 8 MHz and could get real work done - as far as spreadsheets and desktop publishing - arguably more easily than today. So to me, computers waste most of their potential now:
As of 2023, Apple's M2 Max had 67 billion transistors running at 3.7 GHz, and it's not even the biggest on the list, although it is bigger than Nvidia's A100 (GA100 Ampere) at 54 billion, which is actually pretty impressive.
If we assume this is all roughly linear, we have:
year    transistors    clock (Hz)
1979    6.8e4          8e6
2023    6.7e10         3.7e9
So the M2 should have (~1 million) * (~500) = 500 million times the computing power of the 68000 over 44 years, or Moore's law applied 29 times (44 years/18 months ~= 29).
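Spelling that arithmetic out in Python (same numbers as the table above):

    # Ratios from the table above; the "500 million times" figure is their product.
    transistors_1979, transistors_2023 = 6.8e4, 6.7e10
    clock_1979, clock_2023 = 8e6, 3.7e9

    transistor_ratio = transistors_2023 / transistors_1979   # ~985,000 (~1 million)
    clock_ratio = clock_2023 / clock_1979                    # ~463 (~500)
    naive_speedup = transistor_ratio * clock_ratio           # ~4.6e8 (~500 million)

    doublings = 44 / 1.5                                     # ~29 Moore's-law doublings
    print(naive_speedup, 2 ** doublings)                     # ~4.6e8 vs ~6.8e8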
Are computers today 500 million times faster than a Mac Plus? It's a ridiculous question and the answer is self-evidently no. Without the video card, they are more like 500 times faster in practice, and almost no faster at all in real-world tasks like surfing the web. This leads to some inconvenient questions:
- Where did the factor of 1 million+ speedup go?
- Why have CPUs seemingly not even tried to keep up with GPUs?
- If GPUs are so much better able to recruit their transistor counts, why can't they even run general purpose C-style code and mainstream software like Docker yet?
Like you said and Koza found, the per-unit speed only increased 100 times in the decade of the 2000s by Moore's law because 2^(10 years/18 months) ~= 100. If it went up that much again in the 2010s (it didn't), then that would be a 10,000 times speedup by 2020, and we could extrapolate that each unit today would run about 2^(24 years/18 months) ~= 100,000 times faster than that original lisp machine. A 1000 unit cluster would run 100 million or 10^8 times faster, so research on genetic algorithms could have continued.
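And the same arithmetic for the cluster extrapolation in the paragraph above:

    # Doubling every 18 months, as above.
    per_unit_2000s = 2 ** (10 / 1.5)        # ~100x per unit over the 2000s
    per_unit_24_years = 2 ** (24 / 1.5)     # ~65,500x (the "~100,000x" rounds up)
    cluster_of_1000 = 1000 * per_unit_24_years
    print(per_unit_2000s, per_unit_24_years, cluster_of_1000)   # last is ~6.6e7, order 10^8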
But say we wanted to get back into genetic algorithms and the industry gave computer engineers access to real resources. We would design a 100 billion transistor CPU with around 1 million cores having 100,000 transistors each, costing $1000. Or if you like, 10,000 Pentium Pro or PowerPC 604 cores on a chip at 10 million transistors each.
It would also have a content-addressable memory so core-local memory appears as a contiguous address space. It would share data between cores like how BitTorrent works. So no changes to code would be needed to process data in-cluster or distributed around the web.
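A tiny illustrative sketch of that addressing idea (hypothetical names and sizes, not any real chip's memory model): a flat address space whose pages are mapped to owning cores by hash, the way BitTorrent/DHTs map pieces to peers:

    import hashlib

    PAGE_SIZE = 4096      # hypothetical page granularity
    NUM_CORES = 1024      # hypothetical core count

    def owner(page_number: int) -> int:
        # Hash the page number to pick the core (or remote node) that owns it.
        digest = hashlib.sha256(page_number.to_bytes(8, "little")).digest()
        return int.from_bytes(digest[:4], "little") % NUM_CORES

    def locate(address: int) -> tuple[int, int]:
        """Map a flat address to (owning core, offset within that core's page)."""
        return owner(address // PAGE_SIZE), address % PAGE_SIZE

    # Code sees one contiguous address space; the mapping decides whether the
    # bytes live in local memory, another core's memory, or a machine on the web.
    print(locate(123_456_789))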
So that's my answer: computers today run thousands or even millions of times slower than they should for the price, but nobody cares because there's no market for real multicore. Computers were "good enough" by 2007 when smartphones arrived, so that's when their evolution ended. Had their evolution continued, then the linear scaling limits would have fallen away under the exponential growth of Moore's law, and we wouldn't be having this conversation.
> 1. What languages do you think are good ones for HPC?
That's still an open question. IMHO little or no research has happened there, because we didn't have the real multicore CPUs mentioned above. For a little context:
Here are some aspects of good programming languages:
Most languages only shine for a handful of these. And some don't even seem to try. For example:
OpenCL is trying to address parallelization in all of the wrong ways, by copying the wrong approach (CUDA). Here's a particularly poor example, the top hit on Google:
Calculate X[i] = pow(X[i],2) in 200-ish lines of code:
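For contrast, the same element-wise square is a single whole-array operation in an array-style language; here's NumPy standing in for the Octave/MATLAB flavor:

    import numpy as np

    x = np.arange(1_000_000, dtype=np.float64)
    x = x ** 2          # whole-array semantics: no kernels, buffers, or boilerplate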
But unfortunately neither OpenCL nor Octave/MATLAB currently recruits multicore or cluster computers effectively. I think there was research in the 80s and 90s on that, but the languages were esoteric or hand-rolled. Basically, hand out a bunch of shell scripts to run and aggregate the results. It all died out around the same time as Beowulf cluster jokes did.
After 35 years of programming experience, here's what I want:
- Multi-element operations by a single operator like in Octave/MATLAB or shader languages like HLSL
- All variables const (no mutability) with get/set between one-shot executions, like the suspend-the-world I/O of ClojureScript and Redux state
- No monads/futures/async or borrow checker like in Rust (const negates their need, just use 2x memory rather than in-place mutation)
- Pass-by-value copy-on-write semantics for all arguments like with PHP arrays (pass-by-reference classes broke PHP 5+)
- Auto-parallelization of flow control and loops via static analysis of intermediate code (no pragmas/intrinsics, const avoids side effects)
- Functional and focused on higher-order methods like map/reduce/filter, de-emphasize Java-style object-oriented classes
- Smart collections like JavaScript classes "x.y <=> x[y]" and PHP arrays "x[y] <=> x[n]", instead of "pure" set/map/array
- No ban on multiple inheritance, no final keyword, let the compiler solve inheritance constraints
- No imports, give us everything and the kitchen sink like PHP, let the compiler strip unused code
- Parse infix/prefix/postfix notation equally with a converter like gofmt, with import/export to spreadsheet and (graph) database
It would be a pure functional language (impurity negates the benefits of cryptic languages like Haskell), easier to read and more consistent than JavaScript, with simple fork/join like Go/Erlang and single-operator math on array/matrix elements like Octave/Matlab. Kind of like standalone spreadsheets connected by shell pipes over the web, but as code in a file. Akin to Jupyter notebooks I guess, or Wolfram Alpha, or literate programming.
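A rough sketch of that flavor in Python/NumPy (Python is obviously not that language and nothing here is auto-parallelized; it just shows the whole-array, higher-order, no-mutation style from the wishlist above):

    import numpy as np

    # Whole-array math, one operator per element-wise operation (Octave/MATLAB style).
    prices = np.array([3.0, 4.5, 9.99, 12.0])
    taxed = prices * 1.08                      # no explicit loop

    # Higher-order functions instead of class hierarchies; inputs never mutated.
    orders = [{"sku": "a", "qty": 2}, {"sku": "b", "qty": 0}, {"sku": "c", "qty": 5}]
    shipped = list(filter(lambda o: o["qty"] > 0, orders))
    totals = list(map(lambda o: o["qty"] * 10, shipped))

    # "Const everywhere": build new values instead of editing in place.
    updated = [{**o, "shipped": True} for o in shipped]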
Note that this list flies in the face of many best practices today. That's because I'm coming from using languages like HyperTalk that cater to the user rather than the computer.
And honestly I'm trying to get away from languages. I mostly use #nocode spreadsheets and SQL now. I wish I could make cross-platform apps with spreadsheets (a bit like Airtable and Zapier) and had a functional or algebra of sets interface for databases.
It would take me at least a year or two and $100,000-250,000 minimum to write an MVP of this language. It's simply too ambitious to make in my spare time.
After sleeping on this, I realized that I forgot to connect why those aspects of good programming languages are important for parallel programming. Parallel programming is already so difficult; why would we spend unnecessary time working around friction in the language? If we have 1000 or 1 million times the performance, let the language be higher level so the compiler can worry about optimization. I-code can be simplified using math approaches like invariance and equivalence: basically turning long sequences of instructions into a result and reusing that with memoization. That's how functional programming lazily evaluates code on demand, by treating the unknown result as unsolved and working up the tree algebraically, out of order even. Dependent steps can be farmed out to cores to wait until knowns are solved, then substitute those and solve further. So even non-embarrassingly-parallel code can be parallelized with a divide-and-conquer strategy, limited by Amdahl's Law of course.
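A minimal sketch of that evaluation idea (memoized pure functions, independent branches farmed out to workers; illustrative only, nothing like a real compiler pass):

    from concurrent.futures import ProcessPoolExecutor
    from functools import lru_cache

    # Pure function: the result depends only on its argument, so it can be
    # memoized and evaluated in any order (or on any core) without side effects.
    @lru_cache(maxsize=None)
    def expensive(n: int) -> int:
        return sum(i * i for i in range(n))

    def solve() -> int:
        # The two subexpressions are independent, so they are farmed out to
        # separate worker processes; only the final addition is serial (Amdahl).
        with ProcessPoolExecutor(max_workers=2) as pool:
            a = pool.submit(expensive, 2_000_000)
            b = pool.submit(expensive, 3_000_000)
            return a.result() + b.result()

    if __name__ == "__main__":
        print(solve())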
I'm concerned that this solver is not being researched enough before machine learning and AI arrive. We'll just gloss over it like we did by jumping from CPU to GPU without the HPC step between them.
At this point I've all but given up on real resources being dedicated to this. I'm basically living on another timeline that never happened, because I'm seeing countless obvious missed steps that nobody seems to be talking about. So it makes living with mediocre tools painful. I spend at least 90% of my time working around friction that doesn't need to be there. In a very real way, it negatively impacts my life, causing me to lose years writing code manually that would have been point and click back in the Microsoft Access and FileMaker days, that people don't even think about when they get real work done with spreadsheets.
TL;DR: I want a human-readable language like HyperTalk that's automagically fully optimized across potentially infinite cores, between where we are now with C-style languages and the AI-generated code that will come from robotic assistants like J.A.R.V.I.S.
My take as someone living in the northwest and witnessing exponential growth over my lifetime:
- We clearcut 95% of US timber; now the remaining 5% is in states like Oregon, Washington and Idaho. Some wood is an order of magnitude more expensive or simply unavailable, with prices held artificially low by importing wood and deforesting Canada, for example.
- Good, fast, cheap (pick any two). We build homes good and fast, but little effort is being made to reduce costs by an order of magnitude by using materials like hempcrete.
- Zoning laws have been changed so that row houses and 3-4 story apartments can be built right up to the road, maximizing profits but externalizing urban decay. High price/low value.
- The US adopted a service economy when it sent 100,000 factories overseas under the GW Bush administration in the 2000s, after Clinton signed NAFTA in the 1990s. Now a smaller fraction of the population does the work of building homes, with most everyone else pushing paper, so the building trades charge what they wish.
- People with high carbon footprints had more children. Bigger houses for high consumers left less housing for thriftier people.
- Garages, lawns and commuting wasted immeasurable resources after we could/should have transitioned to sustainable energy and transportation in the 1980s but chose to double down on the nuclear family, trickle-down economics, etc.
- The cost of those unnecessary roads was factored into property taxes, driving home prices higher.
- Nearly unlimited financing was available for home loans, while almost none was devoted to startups and other disruptive industries. We got what we paid for (McMansions instead of, I don't know, 3D printed homes).
- Modern techniques were almost never adopted into building codes. So rather than running wiring conduit through walls or encouraging similar practices to make additions and remodeling easier, we encouraged tearing down and building from scratch, which wasted countless trillions of dollars.
- The tax system doesn't value trees, existing structures, previous money spent on remodels, etc. So developers profit from bulldozing homes and razing lots in a day to build houses in a month. The human and environmental cost of that can never compare to moving homes, remodeling them, etc.
- Our culture and entertainment tell everyone they need granite countertops and $50,000 gas guzzlers. Things are expensive because other people consume more and more and more with insatiable appetites and expectations.
- Wealthy people who could have steered our culture in a positive direction chose to do nothing. Or worse, actively encouraged extractive and vulture economic policies to enrich themselves further.
Admittedly, I'm way behind on how this translates to software on the newest video cards. Part of that is that I don't like the emphasis on GPUs. We're only seeing the SIMD side of deep learning with large matrices and tensors. But there are at least a dozen machine learning approaches that are being neglected, mainly genetic algorithms. Which means that we're perhaps focused too much on implementations and not on core algorithms. It would be like trying to study physics without change of coordinates, Lorentz transformations or calculus. Lots of trees but no forest.
To get back to rapid application development in machine learning, I'd like to see a 1000+ core, 1+ GHz CPU with 16+ GBs of core-local ram for under $1000 so that we don't have to manually transpile our algorithms to GPU code. That should have arrived around 2010 but the mobile bubble derailed desktop computing. Today it should be more like 10,000+ cores for that price at current transistor counts, increasing by a factor of about 100 each decade by what's left of Moore's law.
We also need better languages. Something like a hybrid of Erlang and Go with always-on auto-parallelization to run our human-readable but embarrassingly parallel code.
Short of that, there might be an opportunity to write a transpiler that converts C-style imperative or functional code to existing GPU code like CUDA (MIMD -> SIMD). Julia is the only language I know of even trying to do this.
Those are the areas where real work is needed to democratize AI, that SWEs like us may never be able to work on while we're too busy making rent. And the big players like OpenAI and Nvidia have no incentive to pursue them and disrupt themselves.
Maybe someone can find a challenging profit where I only see disillusionment, and finally deliver UBI or at least stuff like 3D printed robots that can deliver the resources we need outside of a rigged economy.
When I was growing up in the 1980s, the government provided public services like this and more.
Somewhere along the way a government of the people, by the people, for the people, perished from the earth.
Not from just one ailment like trickle-down economics, but from a thousand cuts delivered retroactively by revisionist history, until even the youth became their own wardens, and hope was finally lost.
I'd be interested in a concrete example of how you believe the 1980s were somehow different from today as far as government services, and of course what country.
It was the USA. It's hard to explain how the national debt eroded the government's ability to provide public education and other government services like it used to, but here's an attempt:
The irony is that we're still debating this after 40+ years with new calls to eliminate the Department of Education, so in a way, things haven't changed. That's not to say they didn't change, just that any progress made has been eroded back to square one. That's the tragedy I was trying to convey.
I'd like to touch on how your tactic of undermining my point by asking me to do your legwork, then waiting for me or someone else to jump in and defend it, and then claiming victory when no one shows up, creates a climate of fear and ignorance.
On the one hand, it's up to me to defend my point. But on the other, widespread criticism of the experiences of people who were there promotes division, allowing the divide-and-conquer strategy to sway people into voting against their own self-interest and undermining democracy and the rule of law.
The way these debates used to play out was that the academic position was held in high regard. Because it covers a broad context. Asking for specific answers that were already generally known was considered a distraction, or a rhetorical question used to distract from the speaker's main point.
Ask yourself what common knowledge is known about education, where it receives its funding, who teaches and who attends. How that's changed in recent decades due to socioeconomic forces. Who might benefit from such changes, and who suffers due to them. And would I state something that can't be backed up by overwhelming evidence in the public record?
Those critical thinking skills used to be prerequisites to winning debates. Now it's easier to just dismiss arguments because others don't have time to get involved, or just want to keep their heads down.
I'd like to see a term for your tactic get added to the logical fallacies list. I just don't know what to call it. It's analogous to argumentum ad logicam, where you're implying that what I said was false, not because my argument was invalid, but because I omitted evidence that anyone could add or fact-check.
The Department of Education did not exist in the US until 1980. It was created by Carter and signed into law in late 1979. Reagan took office in 1981.
It of course had a predecessor going far back. At any rate, in 1980, the Department of Education had a budget of $14 billion. Today, it gets roughly $80 billion. A quick Google claims that $14 billion in 1980 dollars is equal to $53 billion today. So the federal government seems to be funding education at much higher levels. Of course, most education funding is at the state and local level, not federal.
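Spelling out the inflation math (the ~3.8x CPI multiplier from 1980 to the mid-2020s is my assumption of what that quick Google used):

    cpi_multiplier = 3.8      # assumed 1980 -> mid-2020s CPI factor, not an official figure

    budget_1980 = 14e9
    budget_today = 80e9

    adjusted_1980 = budget_1980 * cpi_multiplier            # ~5.3e10, i.e. the ~$53B figure
    real_growth = budget_today / adjusted_1980              # ~1.5x growth in real terms
    print(adjusted_1980, real_growth)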
This generally goes along with expectations, since the USA spends more per capita on education than almost any other country in the world at #5.
I agree that there are major problems with education in the US, but more money is not going to solve them. It simply gets redirected into unproductive outlets and education "science" quackery like three-cueing. We could probably go back to 1980 levels of spending with the right reforms, and then look at doing things like 2xing teacher salaries and reducing classroom sizes.
Beyond K-12, stats are harder to track down, but "continuing education" and government-funded classes for adults are easy to find. And of course, we have the Internet.
> I'd like to touch on how your tactic of undermining my point by asking me to do your legwork, then waiting for me or someone else to jump in and defend it, and then claiming victory when no one shows up, creates a climate of fear and ignorance.
> Sealioning (also sea-lioning and sea lioning) is a type of trolling or harassment that consists of pursuing people with relentless requests for evidence, often tangential or previously addressed, while maintaining a pretense of civility and sincerity ("I'm just trying to have a debate"), and feigning ignorance of the subject matter. It may take the form of "incessant, bad-faith invitations to engage in debate", and has been likened to a denial-of-service attack targeted at human beings. The term originated with a 2014 strip of the webcomic Wondermark by David Malki, which The Independent called "the most apt description of Twitter you'll ever see".
I wish there was an independent unit test suite for operating systems and other proprietary software.
The suite would run the most-used apps and utilities against updates and report regressions.
So for example, the vast majority of apps on my Mac can't run, because they were written for early versions of OS X and OS 9, or even all the way back to System 7, when apps were still expected to run on System 4/5/6. The suite would reveal that Apple has a track record of de-prioritizing backwards compatibility and bug-fix backports to previous OS versions.
You don’t need to do anything special to “reveal” that Apple doesn’t prioritize backwards compatibility. That is very well known. For example, standard practice for audio professionals is to wait a year or more to upgrade MacOS, to give all the vendors a chance to fix what broke.
Funnily enough, that’s what the UNIX™ certification is, in some—much too limited for your purposes—sense :) See also Raymond Chen’s story of buying one of everything[1].
Eh, I agree in a sense, but I'm also OK without the same level of backwards compatibility that Windows is beleaguered by. Every new version of Windows is little more than a thin veneer of whatever they think is a popular choice for UI design that year, and with that comes a clumsy amalgamation of hugely varying settings dialogs, the classic registry, all the goop. Meanwhile on macOS, I don't expect very complex software to maintain perfect compatibility, but I can reasonably expect most of the stuff I use to carry forward 5+ years. Parallels and OmniFocus were the exceptions, but 1Password from 2012 is still kicking, Data Rescue 3 somehow still works, and I'm sure even Adobe CS6 would, even though it's from the Carbon era.
Just as well, although I loathe some of the choices Apple's made over the years, such as its own Settings app, the overall UI would be pretty recognizable if the me from 20 years ago found a time machine (pun intended). I recently bought a new Mac, and it occurred to me that it feels basically like the eMac I used in middle school all those years ago, albeit with the occasional annoyance I wouldn't have been aware of then.
Out of curiosity, I just checked, and while the CS6 installer is 32-bit, Photoshop CS6, at least, is 64-bit.
The .app icon shows the "circle slash" overlay, however, and attempting to launch it from the Finder (Sequoia 15.1 running on an Intel Mac) yields the OS-level "needs to be updated" alert without actually exec'ing the binary.
The Mach-O executable in "Contents/MacOS" loads and runs successfully when called directly from a shell prompt, however, and displays an application-generated "Some of the application components are missing…Please reinstall…" alert.
Which is actually encouraging, given that I'm attempting to run it directly from the Master Collection .dmg image without actually installing anything, which, given all the prerequisite detritus Adobe apps habitually scatter around the system when installed, I wouldn't expect to work even on a supported OS.
Less encouraging is the fact that the app-generated alert box text is blurry, suggesting the application wouldn't properly support Retina displays even if it could be cajoled into running.
Interesting experiment, thanks for the detail, I think I do still have my installers backed up somewhere, if not the actual disks.
> Less encouraging is the fact that the app-generated alert box text is blurry, suggesting the application wouldn't properly support Retina displays even if it could be cajoled into running.
This was actually the main reason I simply stopped using it (aside from not needing it professionally anymore and Adobe switching to subscriptions after CS6). CS6 was the last version before laptops started shipping with high-DPI screens, and Carbon (from what I understood at the time) was simply the older UI framework that was replaced as Apple switched to the more versatile Cocoa SDK. A sibling commenter suggested it was because Carbon was 32-bit only and that seems plausible; I hadn't experimented heavily with Obj-C or Apple dev, but I'm sure the switch was a massive undertaking.
64-bit Carbon (as a port of 32-bit Carbon, which itself was an aid for porting apps from classic Mac OS to OS X) was originally loudly announced and then relatively quietly killed[1]. Not clear if any code was ever actually written, but given the announcement was at a keynote I expect that somebody, somewhere, at least judged it feasible.
I don't remember Photoshop ever not being HiDPI-safe. In fact, I don't remember a single app that wasn't. Apple was touting for years that a Retina Mac would be coming along.
Huh? In service of what? There’s not all that much inherently good about backwards compatibility, but you’re really implying that deprioritising it is a misdeed. If I wanted to use an OS that prioritised backwards compatibility more than macOS, I’d use Windows, and suffer through the downsides of that trade-off. I’m happy using an OS that balances things in a way that’s more in line with my priorities.
This isn't backwards compatibility though - the example in the post here is a major bug in an actively supported API.
Apple dropping support for old things over time is a reasonable philosophy, but Apple breaking current things unintentionally and then neither fixing nor communicating about it, primarily because they don't actively engage with their ecosystem in general, is a problematic choice on their part.
I've been watching the evolution of the web since 1995, and I remember thinking when CSS got popular in the late 90s that it didn't match real-world use cases. Somehow design-by-committee took us from drawing our sites with tables in the browser's WYSIWYG editor to not being able to center text no matter how much frontend experience we have.
CSS jumped the shark, and today I'd vote to scrap it entirely, which I know is a strong and controversial statement. But I grew up with Microsoft Word and Aldus PageMaker, and desktop publishing was arguably better in the 1980s than it is today, because everyone could use it to get real work done at their family-owned small businesses, long before we had the web or tech support. Why are we writing today's interfaces in what amounts to assembly language?
Anyway, I just discovered how float is really supposed to work with shape-outside. Here's an example that can be seen by clicking the Run code snippet button:
Notice how this tiny bit of markup flows like a magazine article. Browsers should have been able to do this from day one. But they were written by Unix and PC people, not human interface experts like, say, Bill Atkinson. Just look at how many years it took for outline fonts to work using strokes and shadows, so early websites couldn't even place text over images without looking like Myspace.
I think that css could benefit from knowing about the dimensions of container elements, sort of like with calc() and @media queries (although @media arguably shouldn't exist, because mobile shouldn't be its own thing either). And we should have more powerful typesetting metaphors than justify. Edit: that would adjust font size automatically to fit within a container element.
IMHO the original sin of css was that it tried to give everyone a cookie cutter media-agnostic layout tool, when we'd probably be better off with the more intuitive auto flow of Qt, dropping down to a constraint matrix like Apple's Auto Layout when needed.
Disclaimer: I'm a backend developer, and watching how much frontend effort is required to accomplish so little boggles my mind.
Your comment is some interesting food for thought, but I wanted to respond to a couple statements you made:
> not being able to center text no matter how much frontend experience we have
Not being able to center things is a bit of a meme, but flexbox was introduced back in 2009 and has been supported by major browsers for quite a long time. Centering text and elements is now extremely easy.
> css could benefit from knowing about the dimensions of container elements
You're in luck! Container queries were added to CSS fairly recently:
As someone who has struggled with getting CSS to do normal layout stuff that had clear precise semantics but required weird CSS trickery, it's actually more scary than lucky that stuff like container queries have arrived 30 years after CSS was introduced.
Container queries have a very obvious chicken-and-egg problem if used a certain way: if this container is less than 30px wide, make its content 60px wide; otherwise make it 20px wide. Now that container exists in a quantum state of being both under and over 30px wide. I actually haven't looked into container queries to see how they ended up dealing with this yet.
Obviously this is a very contrived example but it can also express itself in subtler ways.
CSS was doomed from the start, IMO. It was a poorly targeted solution to the wrong problem that could never have worked. But you don't have to use it. You can keep using tables for layout; all browsers render them well (generally faster than CSS, and with better progressive rendering too), real-world screen readers and the like have had great support for them since before CSS emerged, and there's no actual downside.
> so early websites couldn't even place text over images
I take offense at this! We weren't that stupid back then! We just put the text 5 times on the page, with position: relative, 4x in the outline color, each copy with a 1px offset in a different direction, and the final one in the text color. That trick worked with pretty early CSS.
I made a handful of corporate sites, e-commerce, CMS and even Flash lol, just out of college while at a boring defense contractor job. I didn't have time to be picky because I had a full-time job, so I always worked with whatever they had, and a lot of stuff was made in Dreamweaver; one corporate site was even exported from Word. The code was awful but worked everywhere. And you always had to get into code anyway, so there was no time to even think about which of the tools was best. Something was always missing in some integration, so you gotta code/script. I think a lot of people made money in the last few tech cycles and had nothing to do but create or fund a bunch of stuff to confuse the marketplace.
I know that feeling, I was nervous about the risks you listed when I was broke and started lifting around 2000/2001 anyway. Trust me, the expense of losing one's health due to inactivity is higher than any financial cost. I trained intuitively, which mostly means going it alone at the school of hard knocks by trying everything and keeping what sticks. I got hurt countless times from ego lifting too much weight in my 20s, not going through the full range of motion, etc, but nobody else got hurt on my watch. But athletes pretty much always recover anyway. That's the important part - learning that proper exercise, nutrition and supplementation makes the body fungible, so there's really no limit to what it's capable of. Now I'm one of the biggest guys at the gym as a lifelong natty and nobody's laughed at me since I got the eye of the tiger about 3 months in.
Ya this is the fallacy with leaving human rights up to the states.
There are certain shared values which must be defended in order to have a free society. Which means that people who don't see a problem with these eventualities (due to their intolerance) can't be allowed to control government. The tolerant can't tolerate intolerance because the intolerant break the rules of the game and eventually collapse civil society.
Unfortunately I've watched intolerance reign for most of my life, definitely the last 25 years, which is roughly everything after the Dot Bomb, 9/11, etc. Now ignorance fueled by propaganda created by billionaires under wealth inequality determines our election outcomes and I worry that it may be too late to stop the global societal collapse that's coming.
Yes, I agree with you on this. But we are seeing one big bomb coming our way. Climate Change.
It is a race on what happens first. To me, climate change will be like a cleansing, and since most governments of "first world" countries refused to address it over the past 50 years, everyone will get to pay the price now :(
Now/soon the US is doubling down on speeding up Climate Change, and as usual the young will suffer the most.
Happy holidays everyone!