SETI@home shuts down after 21 years (bleepingcomputer.com)
1216 points by ingve on March 3, 2020 | 343 comments



Back then, I was quite young. Around 2002.

We were like 5 boys getting into overclocking. For our SETI team, the "Bücki crunching connection", from my small hometown in Germany.

I just tried to find an old screenshot from back in the day, and wow I found one, from 2002:

https://imgur.com/a/L5Q5PMR

So funny, it is all there: ICQ, mIRC; an icon to launch Quake III. And some SETI crunching stats. In Internet Explorer.

Seemingly we were actually crunching under one account for the team OC-CARD.de (http://www.setiatwork.com/team/teamstats.cgi?teamid=30308)

You might have done the same, but I am still sharing this because this has influenced me a lot:

I bought an AMD Duron, some arctic silver heat paste. I took a lead pencil to connect some dots on the CPU to unlock the multiplier freely, got a freaking heat sink, and overclocked the hell out of the Duron. I needed to hide this from my parents, but of course the plan was to crunch 24/7.

I found another screenshot, the file is called "duri@fsb133.jpg". Looks like I knew what I was doing:

https://imgur.com/a/eG9nmdO

Edit: looks like our team (OC-CARD.de) was actually among the top 200 of all SETI teams. Wow, yeah there were some serious people in the team, like "Butcho", ranking in the top 1000 of individuals. No idea who that guy was and where he got the compute resources from. That's the romantic part of that Internet era.

Edit2: turned this into a very quick blog post to properly archive my own memory here: https://gehrcke.de/2020/03/setihome-hibernation/


Oh man those screenshots really are a throwback:

- [x] Quake <3

- [x] Classic windows UI; custom scheme

- [x] Double height task bar

- [x] ICQ & mIRC for all your communications needs; I'll give you a pass for Outlook when there were clearly "better" alternatives like Pegasus ;o)

- [x] Broken icon in taskbar launcher area

I'm loving it, thanks for sharing!


Was about to write the same :)


Just wanted to say thx for writing a blog post. It's rare to see a great history snippet left as a comment on HN come with a link to longer-form content from the author's perspective, for anyone who's curious.


Thanks for the kind words!


I was a new web developer back then, and the bad taste left in my mouth by that era's Internet Exploiter is still strong. LOL. ugh.


Younger developers truly don’t understand the pain and suffering IE inflicted


Try designing an email newsletter for Outlook. Microsoft really hasn’t moved on that far.


I hear HTML emails are still a "dark art"


They are. Many unsupported tags. Thanks, Outlook.


“This site best viewed in Chrome”.


Dude, IE was Windows-only. It went against everything the open Internet standards advocated for. And on top of it, I was a Mac guy, and Mac IE was a completely different beast, so I had to code for Windows IE, Mac IE, then Safari and Firefox and eventually early Chrome. And Windows IE was always the motherfucker you had to figure out nasty CSS and JS hacks for to get, forget pixel-perfect, even pixel-similar rendering vs. other browsers.

If you wanted pixel-perfect it was a fucking nightmare. Literally doubled your work time just to add WinIE compatibility. This was in the IE5/6/7 era.

Speaking of JS... all this was made worse because there were NO good JS libraries at the time to make things easier. Debugging consisted of alerts. Eventually, Prototype (and later jQuery) were literally godsends, and there was some JS lib (forgot the name) which added a reasonable debugger (this was before browsers themselves had reasonably good integrated debuggers!)

Chrome is not like that. The same rendering engine exists on Linux, Windows and Mac, and it has an open-source highly-compatible version, Chromium.


> and there was some JS lib (forgot the name) which added a reasonable debugger (this was before browsers themselves had reasonably good integrated debuggers!)

Firebug changed my life when it came out!


> Dude, IE was Windows-only.

Not originally it wasn’t. The UNIX versions are an interesting relic.


It was available on MacOS as well.


The grandparent post made the point that for most of its history in the late 90s and early 00s (pretty much all of the CRT iMac era), IE for Mac used an entirely different rendering engine than the Windows version. And then it was abandoned.


Up to version 5.5.


"This application is optimised for Chrome, but the default browser is Edge. Please cut and paste the following link into Chrome." Literally read this today.


That's also the case at my workplace. On some machines, IE11 was still the default browser, despite it not being supported.


Do you work for banks or something?


Chrome is open source and available on every OS. Try running IE with Flash and ActiveX on Linux in 1999. People today don't know how easy they have it :) (not saying you in particular).


That's not really true; Chromium is open source. Chrome, while based on Chromium, is not.


Where have you seen this?


I've had sites not work in anything but Chrome, specifically one of those bicycle registry sites. I think it was fixed.


Well, hover link styling and dynamic HTML in IE3/4 did push the web forward a bit. Not necessarily for the better, but it was influential, and defined some of the things that became standard.

But that was probably a bit earlier, since I was off Windows by '98 or '99.


That is why I am pissed when people say WebKit (or even Chrome) is the new IE when they never lived through or developed during the IE era; even jQuery came much later.


I ran it on the weekend once on the queue at Disney Animation, maybe a hundred CPUs on SGI Origins. Went from zero to the top ten-ish percentile in a half hour or so. Good times, until I got a stern talking-to from some systems guy.


Back in the day when I was working at Sun, a script was set up that people were running on their workstations all over the world. It set up the SETI@home client and linked it to a single account. It was very popular, and some people were running it on the most powerful lab servers they had. Sun was trading places with SGI for the number one spot for a while.


18 years ago I ran it for months in the off hours on all the company's servers in one datacenter (about 500, which was quite a lot at that time). Fun times.

Impressive that they survived for so long!


Yeah. You wouldn't believe the number of idiots that went around covertly attempting to run this (and similar) on the production Unix servers of their employer.

Saying that as the lead Solaris SysAdmin for a large telco project a few decades ago, with one manager in particular who repeatedly kept on firing it up on the main (high end SunFire) billing cluster servers. Even when expressly told not to. That guy was a f*cking moron. :(


Prolly he really wanted to find out that we’re not alone in the universe.

Sucks to know we’re alone and no one has reached out.


SETI was probably only ever going to catch an intentional transmission; it's far from proof we're alone.


Nice attitude buddy, maybe the problem was you. Companies regularly waste 1000x more on executive salaries.


Heh Heh Heh. I knew someone would take that approach.

Put it this way: try explaining to the customer's higher-ups why that crap was running on their central billing cluster servers when an outage occurs and the place loses revenue ($mil's).

"Job ending" for the manager in question is a good description. ;)


This has little to do with SETI.


This guy was literally running SETI@Home on the production billing cluster servers for a multi billion $ telco, and would not stop doing so.

He didn't even work directly for the Telco, but for a consulting place (Accen...). Think of the liability factor for that, when there was a production outage on the cluster. Try proving the software (maxing out cpu, screwing up instruction cache + context switching, etc) wasn't part of the cause. :(


Your employee's lack of discipline and/or proper systems job control is the root issue here. Trying to associate it with SETI is unfortunate.


Oh yeah, fully agree about that.

It just seems the SETI@Home (and other BOINC) projects attracted people who do that kind of thing. I haven't seen it with anything else before, though I'd expect cryptocurrency miners to be similar. ;)


In case anyone reads the headline and thinks that the SETI research center itself is shutting down, or that the project failed in some way:

> "It's a lot of work for us to manage the distributed processing of data. We need to focus on completing the back-end analysis of the results we already have, and writing this up in a scientific journal paper," their news announcement stated.

Maybe it's PR speak, but it does make sense to me that there'd be a point of diminishing returns where enough data has been processed.


They've already completed construction of the ansible, the design of which was decoded several years ago.


Thank you for making me laugh out loud while making breakfast


The singular in "journal paper" makes me sad. Surely there'd be more papers on this topic: on distributed computing for one, on the methodology of the analysis for another, and on any results for a third.


The BOINC paper was published last year:

https://link.springer.com/article/10.1007/s10723-019-09497-9


Thanks. Clearly been out of academia for too long.


> or that the project failed in some way

Well... We haven't found anyone we can talk to yet so, in a sense, we are still failing.

Eventually we will.


Perhaps, in the era of cloud computing, "crowdsourcing" computing power is no longer needed?


I doubt cloud computing fits their use case. They need constant power, and the "cloud" would be much more expensive than having their own hardware (and of course even more expensive than using someone else's computer).

The reason SETI@home is not popular anymore is how our hardware adapted. Back in the day, CPUs constantly operated at the same speed. If you left your computer on, it used the same amount of power and generated the same amount of heat no matter what it did. SETI@home took advantage of that by saying: let's use that wasted energy to do something useful.

As computers became faster, heat and power consumption became bigger issues, so mechanisms were added to let the CPU scale its frequency down and take "micro sleeps" when there's nothing to do. OSes adapted similarly. Instead of triggering an interrupt at a regular interval, they implemented so-called tickless mechanisms, where housekeeping operations fire when needed instead of at regular intervals.

All those advancements helped a lot with portable computers (laptops, pads, phones, etc.) but also made it to desktops and servers.

Because of those advancements it is no longer "free" to run SETI@home. If you run it, your computer gets warmer, spins up extra fans to cool down, and uses more power, so people are less motivated to run it. That's what made me stop using SETI@home myself.
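
To make that concrete, here's a minimal sketch (my illustration, assuming a Linux box exposing the standard cpufreq sysfs files; availability varies by kernel and driver) that samples the current clock speed. On an idle modern machine you'll see it sit far below the maximum and jump the moment you give it work, exactly the scaling that made background crunching stop being free:

    # Sample the current CPU clock via Linux's cpufreq sysfs interface.
    import time

    def current_mhz(cpu=0):
        # scaling_cur_freq reports the current frequency in kHz
        path = f"/sys/devices/system/cpu/cpu{cpu}/cpufreq/scaling_cur_freq"
        with open(path) as f:
            return int(f.read()) / 1000

    for _ in range(5):
        print(f"cpu{0}: {current_mhz():.0f} MHz")
        time.sleep(1)  # mostly idle; expect lower readings than under load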


You're right, that might be the real reason. But did they report a decline in users or CPU hours? What I had in mind was more like Amazon, Google, or Microsoft "donating" some idle instances, or something like that.


Well, I don't think they reported it, but back in 2000 a lot of my friends were running SETI@home; today I don't know a single person who does. Frankly, I thought it was shut down much earlier.

As for Amazon, Google, or Microsoft donating: the same thing applies. It is no longer free unused power; it actually costs money. It's weird to think they would just give away their resources that way when they also work on optimizing their power usage, to the point of having systems that automatically shut down idle computers.

If they want to donate to a cause, it's more efficient for them to give money, or at the very least provide services at a discount. But IMO if they were doing it, they would not do it quietly, so I doubt public cloud providers contribute anything to SETI.


BOINC is not shutting down either.


Maybe they found something


I remember being a younger pup and building fleets of machines (including overclocking, etc.) for the sole purpose of running S@H, and all of the amazing software that sprang up around it... SetiSpy, SetiDriver, SetiQ. I often wonder how machines of today would compare in work production to my old Duron 800, but alas.


> Duron

Now there's a name I haven't heard in a while. Duron, the poorerer man's Celeron.


This comment made me nod my head in agreement, laugh and also experience a wave of nostalgia. All at once.


I once overclocked one with a pencil. Just connected the solder points between some leads on the CPU package.


I remember doing this too, it unlocked the multiplier which they locked at the factory by slicing through some metal blocks with a laser. My Duron 600 would run at 800 with a normal cooler & fan, then I added a really loud Delta fan and a copper heatsink and it was stable at a magical 1Ghz. Eventually I got cocky enough to solder a resistor between a certain pin on the motherboard and earth which effectively increased all the voltage settings in the BIOS so I could push enough voltage through it to run at 1.2Ghz - it ran like this for years as my only PC. Good times :)


Seems to have been a fairly common trick with the Durons. I sadly never experienced the rush of trying it mostly because Athlons and PIIIs were all I'd ever played with when the 1ghz chips started coming out, but this was probably what made the Durons so attractive to student hackers. https://arstechnica.com/civis/viewtopic.php?f=8&t=840954

Tangent: I love that I can pull up forum threads from two decades ago.


I remember when I got my first free 1ghz PIII machine in high school. This was like 2009. Huge difference from my 450mhz PIII machine.


Ah! Athlons. My overclocking journey picked up when I built a DIY watercooler out of a bent 5mm copper sheet (at what thickness do you stop calling it a "sheet"?) polished to a mirror finish. At some point, during the peak of an Eastern European winter, I kept the heat exchanger outside in the -20°C temperatures!


luuuucky. i only got the k6 :O


I had a K6-233 which was always on the verge of overheating (or maybe I was obsessing a bit too much about monitoring it). I envied my friend's K6-200, which was more stable it seemed.


My K6-233 got so hot it demagnetized its CPU fan. I noticed after the floating point unit fried the first time I tried to play some post-quake FPS.

After that, it would boot, but the antivirus software I was running said it had a virus. I guess that was the only msdos program in the boot sequence that performed floating point.


I had a K6-II which I put a big heatsink and large fan on (had to ghetto mod the mounting), sawed a 120mm hole in the case just above the cpu and put another fan there. It overclocked handsomely and stable after that. I don't remember the exact frequencies but I think it was 266MHz stock and clocked close to 400MHz with some modifications.


Hahah, I can relate so much with the 'poorerer' part, I'll raise you another one: my first '686' was a Cyrix 6x86 because those were the cheapest you could get. Eventually got my hands on a Pentium II and had to scavenge parts to replace my setup.

Good times. Nowadays I just buy a laptop with everything soldered on it and complain a lot if I have to reboot it every month.


In the late 90s I had a 486sx33 with one of those Cyrix 586 upgrade chips for the math co-processor slot. It was so hot it had a fan on it! Crazy!

I also maxed out the RAM to 32MB and upgraded the SRAM in my Trident video card so I could do 256-color mode in Windows 95.

I also had a 40mb Erwin Tape Drive for backing up my 100mb HDD. It wasn't fast but it was faster than reinstalling Tie Fighter from Floppies.


The "math co-processor" chip was actually a full DX33 and your SX33 CPU was disabled.

https://en.m.wikipedia.org/wiki/Intel_80487SX


My first was an AMD 286 @ 12MHz, still have it in a cigar box somewhere.


Started with a ZX81, graduated to a 286 with a whopping 2MB RAM, which was a huge amount, but at a time when everyone else was getting 386s.


CyrixInstead ™ represent!


Waay back in the day.. 286 12mhz with 1mb ram and a 10mb RLL drive. Setting the jumpers on the drive controller block was a nightmare that I still have. I think there were 16 or 32 pins and all the jumpers had to be just right or no dice (no drive). Further along I had a 40mhz Cyrix 4x86 I think, with 4mb ram and a Trident VGA 256 color card. I remember the ram being old school block chips and not SIMM modules. It's been a long time, so I could be mistaking the memory type between mobos. Was a great 3.11 machine for the time.


Cyrix/Nexgen 5x86/133!


Celeron 300A, the relative poorer man's Pentium


Weren't they also supported by the Abit BP6 motherboard, letting you use two of them as poorer man's Xeons?


Indeed they were. I had that exact setup. Overclocked the hell out of them. Unlike the 550 MHz someone else suggested, I'm 97.6% sure I only had them clocked at 450 MHz.


Yup, dual 450 celery was my gaaaangster setup at my silly startup :)


Dual celery story: me being around 14, not allowed to build my own PC, but I got to spec it, so I ordered dual CPUs. The company put in a dual Slot 1 motherboard, and then two Celerons in Slot 1 adapters.. that didn't support SMP. After complaining for a while, they traded the two Celerons and adapters for a single P3 (I think I was being had). It took a year or two before I could afford the second P3. I still hold a grudge towards that particular computer shop, or I would, had they not gone out of biz. I did run a lot of SETI@home on that P3; it earned me my first two certificates :D


I built a bunch of them and still have mine in the attic. But alas, you're probably like me: by the time I actually purchased one for myself, the overclocking headroom was much less than on the original ones, which went from 300->500MHz fairly easily. IIRC it's because Intel was selling 400 and 450MHz parts and binning all the good dies into the upper-tier bins.


I had a BP6 and remember 450 too (I may still have it tucked away somewhere). I think there was a 333 that could be pushed to 550 or something but not in SMP and I think it was slot 1 and not socket 370.


Yes, I owned two of those - alas, they landed in the period of bad capacitors, but for the money - wow. It was amazing, and for many, one of the only dual-CPU systems they ever owned as home users - even to this day, unless you buy a cheap second-hand workstation/server platform off eBay or the like.


+1 here who had the Abit BP6 dual celery setup. Had a pair of 400s running at close to 500 all the time. Both were lapped down to copper, running GlobalWin FEP32 coolers. The wonders of a Windows 2000 gaming machine with the GeForce 256, which was quickly followed by a GeForce 2 GTS upgrade as soon as it came out. Running Quake on one CPU and stutter-free MP3s at priority on the other. Good times.


YES. That was the first rig I ever built and I can still remember installing Slackware on it from a zillion floppies.


oh man I had a Celeron 300A that ran at 464mhz for yeeeeears. Hit the 504(?) day uptime bug where the counter reset to 0.


464? I ran mine at just 450 and I was so proud of it. Now, this 464 -> 450 explains (I hope so) why I lost so many online Quake 3 battles ;]


yeah it was something like, you ran a lower clock speed but a higher multiplier? I've been out of the hardware game... well, ever since then. But that's my recollection. 450 vs 464 was the first place you were likely to see stability issues depending on your specific setup, but most were stable to at least 464.


I still have 2 of these in my old hardware box. I used to run them at 4.5x100MHz on Abit BH6 single CPU mobo. Great way for me to bump performance as a poor CompSci student. And yeah, I ran SETI@home on them as well.


Overclocked to 450! Way more bang for your buck!


Those were the days, when I could give full specs for 20 or so CPUs off the top of my head. Good times.


I was right there along with you when I was helping my parents pick out their next PC purchase, a P3 450, and from then on I just consumed every single CPU article and review I could find. So much of the knowledge I picked up about the computer industry in general has helped me later in my career.


the best part is that durons were better than celerons and cheaper for quite some time


The duron had a pencil trick which unlocked it somehow. I don't remember the details.


The pencil trick:

"Unlocking the Duron and Athlon Using [a] Pencil

Hold your CPU in your left hand and look very closely at the CPU, so as to clearly see the L1 bridges.

Use a business card to separate the bridges so that you do not connect the L1 bridges to each other. Work your way across the bridges from left to right (using a business card as a separation tool) and connect the bridges by rubbing the pencil back and forth over the bridges about twenty times until it is dark black, not the normal gold color. Make sure that all the bridges are reconnected, but not touching each other, and you are on your way.

I know it sounds incredible, but that is actually all there is to it. Your processor, if done correctly, is ready to be overclocked. It can now be set to run at different clock frequencies, eliminating the need to increase the FSB."

http://computer-communication.blogspot.com/2007/06/unlocking...


I had never heard of this trick. I love it though. Reminds me of using a hole punch to make a double sided floppy disk from a cheaper priced single sided floppy.[0]

Of course, there were also the AMD multi-core CPUs that had some of the cores disabled and were sold at cheaper prices, and software hacks were available to re-enable those cores.

[0]https://en.wikipedia.org/wiki/Double-sided_disk


Oh, yes, the Phenom II 550 Black Edition. The early ones like the one I still have stashed away were an unsung bargain. My motherboard has a BIOS option to unlock extra cores, and so I got four cores for the price of two; at the stock clock speed all cores were rock-solid stable. I never did find any unbuffered 4GB ECC RAM sticks, so I had to settle for 4x1GB.

With 16 GB and a SATA3 controller or PCIE-to-NVMe adapter it would still be a respectable performer today, in spite of the DDR2.


Some would just have been faulty units: I tried to turn my tri-core into a 4-core, but it didn't work...


Wow! Old man and the puncher ... those were the days.


I used this trick, and it did indeed work as intended. It was a little scary ("You want me to draw on my CPU?!?"), though.


The Duron had a set of circuit traces exposed on the ceramic chip carrier that were cut by laser at the factory to set the maximum allowed clock multiplier (and thus its final speed). You could reconnect all the cut traces with pencil lines and use a motherboard that let you set the multiplier directly to overclock freely.


It was cheaper for AMD to produce one chip and then selectively cut the multiplier lines to differentiate SKUs for different market segments, so if you closed the lines again you'd get a different, faster clock multiplier.


What about Athlons?


Was just helping a relative get their system back up and running after being unplugged for a bit.

Celeron 950MHz, 512MB RAM, 20GB HDD, 12x CD-ROM (not DVD :), floppy, add-on IDE Ethernet card. Win XP actually.

We have an unused Win8 desktop that we'll be refurbishing and gifting to her soon.


I couldn't afford either.


You meant poorer, I presume. Poorerer might be something in a calculator?


The celeron was for the poorer. ;-)


It was a joke.


So was my comment.


I worked for a large insurance company, my first job, when SETI@Home was launched. It was a typical large cubicle farm. I remember sneaking around to any unused WindowsNT workstation around me and installing it. Good times, good times.


When I first got my i7 6-core with quad GTX 690s (2 GPUs per card), I naively let SETI@home run rampant when the computer was idle. The room they were in got up to around 90F when I let it run at 100%.

Sad to see they're shutting down.


I suspect in the very early days of cooperative computing, one could genuinely argue that there were "idle cycles" that were going to waste if a computer was powered on and not computing something. But this became more and more inaccurate throughout the 1990s as new power management technologies became a part of computer designs, and is completely wrong now. Phenomena like Bitcoin mining and feeling your laptop get warmer depending on what it's doing show that easily, as does your room-heating experience.

I'm still curious about the extent to which the claim of idle compute power going to waste was true when various projects were set up. For example, the press release about the Cooperative Computing Awards (I've forgotten whether this was 1998 or 1999) says

> However, the computer industry produces millions of new computers each year, which sit idle much of the time, running screen savers or waiting for the user to do something.

https://www.eff.org/press/releases/eff-offers-cooperative-co...

The main thing I can think of that would have supported the concept of idle cycles going to waste was

https://en.wikipedia.org/wiki/HLT_(x86_instruction)

> [the CPU-idle instruction] was not specifically designed to reduce power consumption until the release of the Intel DX4 processor in 1994.

I participated in DESChall and distributed.net's RC5 challenge and I remember being told, and telling others, that we were only using resources that would otherwise have gone to waste. Were we already greatly mistaken about that in 1997?


I would think of compute resources "going to waste" in three separate groups:

- The baseline electricity the computer is using. This will get used and produce no S@H work either way (in an era where our computers didn't go to sleep)

- The added electricity to do S@H work. This certainly was not idle, but also was a relatively small amount relative to an entire household's use.

- The hardware itself.

I would say that S@H were making use of hardware that was "going to waste" (this is CAPEX the project didn't need to spend), was having volunteers donate the added electricity, and could be said to be making use of the baseline electricity inasmuch as it would be used either way, and if S@H were to set up their own compute cluster, they would also incur similar overhead.

I don't know what system efficiencies were like in the late 90's, but I would totally believe someone if they said "idle" was 40% the power consumption of "max usage". Using that number as an example, "donated electricity" gets (1/0.6=) 1.6x as much compute-per-watt as if it was donated to run an equivalent, dedicated, machine. Maybe you can convince yourself that you were using "idle cycles" and donating on top of that?
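
Putting the comment's example numbers into symbols (my restatement of the same arithmetic): if idle draw is 40% of maximum, the donated marginal power is $0.6\,P_{\max}$, so per donated watt the project gets

$$\frac{C / (0.6\,P_{\max})}{C / P_{\max}} = \frac{1}{0.6} \approx 1.67$$

times the compute-per-watt of a dedicated machine with the same compute throughput $C$ drawing the full $P_{\max}$, which is the ~1.6x figure above.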


That's definitely not the way I thought of it at the time, but that seems like a useful way to describe it!


I may be wrong, but back in the 90s an idle processor used up just as much electricity as an actively computing one.


That's really part of what I was asking about. The claim from Wikipedia about Intel's HLT instruction shows that this started to change a lot by 1994, but I don't know if there are other similar considerations.


Before 2002 (AMD's Cool'n'Quiet) and 2005 (Intel's SpeedStep on the desktop), CPUs didn't do any frequency scaling for power savings. When the CPU wasn't asleep it was running at its full voltage and clock speed. While the HLT instruction let them 'idle', they only used marginally less power than when active.

So running distributed.net or SETI@home was using unused processing capacity for a marginal increase in power draw. If you ran them overnight with your monitor off (typically a CRT), your computer was drawing less power than if you were sitting up all night on IRC or playing Quake. This changed once frequency scaling became commonplace on desktops.


If you use an electric radiator for heating, you might as well use that electricity to compute something worthwhile!


For anyone that isn't aware, yes, they really are equally efficient at heating!


I'm surprised it lasted so long, the idle computer donation made way more sense back when processors didn't really change power consumption or downclock.


Wondered the same thing!

Perhaps "they" (the committee) were afraid that computers were getting powerful enough that we might just find something. Better pull the plug.


My Pentium II home PC identified a candidate signal for them in 2001! Sadly, I have no idea if the galaxy in question is still on their list of leads.


Or perhaps there's a lot more computing-power demand for crypto coin mining than for alien searching.


I ran it before I went to school in the morning and before I went to bed at night for a few days. Then the novelty wore off. Also, it interfered with my downloads. I also remember I watched Contact on TV around that time and it got me hyped about doing my part to discover extraterrestrial life.


Had that same processor on my third computer.

Great era to build your own stuff.


Ah the fabled durito.


same


Not to sound like a broken record (I and others have said this or similar on these kinds of threads) but... I, personally, have become convinced that looking for signals this way is actually pointless.

The argument is basically this:

1. Within 1000 years (and maybe a lot less) we will have the engineering capability to build space habitats, powered by solar power. This last part is important because this thought experiment isn't gated on commercial viability of nuclear fusion power, which I'm not yet convinced is possible.

2. These space habitats are far more efficient at creating living area than planets. I forget the exact numbers but something like 1% of the mass of Mercury is enough to create enough living area for something like 10^16 to 10^18 people.

3. Space habitats are more convenient and cheaper to move between than leaving or even entering a gravity well like Earth's.

4. Roughly one billionth of the Sun's energy hits the Earth.

5. Once you have the ability to create one of these things, each becomes progressively easier.

This, of course, is the classic Dyson Swarm. Originally this was called a Dyson Sphere but this has led some to think it's a solid shell around a star. That was never the intent. Even if it was, no known or currently theorized material could support this.

Dyson Swarms are not subtle. Even a partial Dyson Swarm should be detectable as a large IR source compared to how much visible light is produced. This is because the only way for something in space to cool down is to radiate that heat away and physics determines the wavelength of that based on the temperature of the object.

Standard objection: what if you can recycle that heat? Well, you can't do that perfectly (as this would violate Thermodynamics) and even if you reduce IR emissions by 90%, you've simply reduced the IR emissions by one order of magnitude. For comparison, the Sun produces roughly 4x10^26 Watts of power.
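
A back-of-envelope blackbody estimate (my numbers, not the commenter's, assuming a swarm near 1 AU absorbing essentially all of the Sun's output):

$$\sigma T^4 = \frac{L_\odot}{4\pi R^2} = \frac{3.8\times10^{26}\ \mathrm{W}}{4\pi\,(1.5\times10^{11}\ \mathrm{m})^2} \approx 1.4\ \mathrm{kW/m^2} \quad\Rightarrow\quad T \approx 390\ \mathrm{K},$$

and Wien's law puts the emission peak near $2898\ \mathrm{\mu m\,K} / 390\ \mathrm{K} \approx 7\ \mathrm{\mu m}$: a star's worth of luminosity glowing almost entirely in the mid-infrared.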

So if you accept the above premises, the gap between stabbing each other with swords and having this technology, at least for us, is 1000-2000 years: a cosmic blink of an eye during which a civilization produces radio signals without the above IR signature. Those are long odds.

Personally I subscribe to the view that technological life, at least within a billion light years of us, is likely quite rare.

The above is a very superficial summary of a topic that Isaac Arthur's channel goes into great depth about. I guarantee you any objection you have has at least one video that goes into that in great depth.


That's a fascinating comment, thank you.

I personally suspect the engineering challenges might pale next to the challenges of political organization.

Quite simply, the continued human expertise and organization necessary to manage and sustain such a system is far, far beyond anything we have today.

We can't even manage to globally reduce CO2, and the recent government responses to coronavirus have left a lot to be desired, to say the least.

Just because you put people up in space habitats doesn't mean they become any less power-hungry, any more cooperative, or any more peaceful.

You say we'll have the engineering in 1,000 years, and I could buy that. But read Aristotle's Politics from 2,400 years ago, which is concerned mainly with political stability and revolution, and he might as well be describing people today.

I'd like to see us manage "spaceship earth" a helluva lot better before I have even the remotest faith we could manage space habitats politically. Heck, we couldn't even manage Biosphere 2, remember?

[1] https://en.wikipedia.org/wiki/Biosphere_2


> Quite simply, the continued human expertise and organization necessary to manage and sustain such a system is far, far beyond anything we have today.

I get your concern but the differences are significant.

1. Habitats are mobile. If you don't like the neighbours you can up and move, something not possible with geography on Earth;

2. Land area is essentially infinite, not so on Earth.

> We can't even manage to globally reduce CO2

On the contrary, excessive CO2 is, at least in some context, a remarkably simple problem. We can extract CO2 from the atmosphere and make fuel. It just doesn't make any sense to do it because we need to burn fossil fuels to power the process.

So all we really need is a power source that's cheaper than fossil fuels. There are two obvious contenders for this:

1. Solar power. Estimates I've seen are that an orbital collector can be about 7 times as efficient as current Earth-based solar collectors.

2. Fusion power. Personally I'm not yet convinced if hydrogen fusion is viable. There are of course other proposals. We shall see.

You'll note that I don't include fission. It's an unpopular opinion on HN but until we have a good story for fuel and waste processing then this is a nonstarter.


>>1. Habitats are mobile. If you don't like the neighbours you can up and move, something not possible with geography on Earth;

That relies on the assumption that people within a single habitat wouldn't want to separate for whatever reason. A nation within a nation. It already happens on Earth (see: Catalonia, Taiwan, Hong Kong, Silesia).


Getting my sci-fi hat on here, but it would in theory be easier for them to pool their money together and buy / build their own spaceship. Of course, this assumes that each ship is its own political entity; in practice, we'll probably see the equivalent of countries and states but in space, with territories and jurisdictions that span multiple habitats.


Why would separate spheres be any better than countries? One of the things that shot us forward was the industrial revolution, and that was mostly powered by cheap trade routes over land and sea.

As in, trade is essential for growth, in both goods and ideas.


>1. Habitats are mobile. If you don't like the neighbours you can up and move, something not possible with geography on Earth;

Mobile? Are we talking O'Neill Cylinders here? While yes, technically they will be mobile, I highly doubt the reaction-mass requirements of actually moving a constructed space habitat, one built to keep a city-sized population as close to self-sufficiency as possible, would be terribly practical, especially if everyone was doing it wantonly.

Keep in mind what has happened lately with Moore's Law in processors, and realize that the same phenomenon is likely to happen with spaceflight: yes, the technology will scale in capacity, but a great deal of that capacity may be expended in rendering the technology "more accessible" to the less specialized consumer, as opposed to optimizing for the maximum throughput theoretically possible.

Cars/motorcycles would be another example of the same phenomenon: we trade the technically superior performance of the motorcycle (maximally efficient, yet relatively hard to control and severe to lose control of) for the relative safety and friendliness a car offers a less skilled user, even if lugging around all the extra metal eats into the overall efficiency of moving small packets of material from point A to point B via internal combustion.

Never mind the political issues. I can't muster enough optimism to realistically expect getting beyond the ISS-like stage of space habitation within my lifetime at the rate we're going, and I fully expect climate/ecological breakdowns to start creating too much conflict to keep us moving forward in a cooperation-friendly regime.

I'd love to be proven wrong though.


>> We can't even manage to globally reduce CO2

> On the contrary, excessive CO2 is, at least in some context, a remarkably simple problem.

I would argue that even though we have the technical skill, the fact we do not have the political capacity is a significant problem.

Given where humans came from, I'm sometimes surprised that our political thought processes function at all on groups as large as cities, never mind nations or the planet as a whole. While I would like to hope that scaling to the size of a Dyson swarm or larger would be fine, even at the level of one planet we keep repeating cycles of genocides followed by people saying "never again" followed by more genocides elsewhere.


> but until we have a good story for fuel and waste processing then this is a nonstarter.

Waste processing in space? Just give it a slight push in the direction of the sun (or whatever the right direction is, taking gravity into consideration) and forget about it.


> a slight push in the direction of the sun

That's not how orbital dynamics work; a slight push in any direction won't change the orbit enough for that thing to not be a problem anymore.

To make something fall into the sun, one needs quite a bit of delta-vee, which means energy. However, if the thing isn't in any hurry (and I imagine waste isn't), one can simply attach a solar sail with a tiny computer for steering it, and let it brake for the next few thousand years.

(Kerbal Space Program should be part of school curricula)


It's actually cheaper (in delta-V terms) to leave the Solar System than it is to hit the Sun [1].

[1]: https://www.reddit.com/r/askscience/comments/5lt50r/why_does...
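
A rough vis-viva sanity check of that claim (my numbers, standard two-body approximations, starting from Earth's orbital speed $v_\oplus \approx 29.8$ km/s):

$$\Delta v_{\mathrm{escape}} = (\sqrt{2}-1)\,v_\oplus \approx 12.3\ \mathrm{km/s}, \qquad \Delta v_{\mathrm{sun\,dive}} \approx v_\oplus - v_{\mathrm{aphelion}} \approx 29.8 - 2.9 \approx 26.9\ \mathrm{km/s},$$

where 2.9 km/s is the aphelion speed of a transfer orbit grazing the solar surface. Escaping really is less than half the cost of falling in.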


Pushing something into the sun requires removing most of its orbital velocity, not just a simple push. A small push will create something on a very similar orbit to yours that, absent a nearby body to induce a lot of drag, will intersect with your orbit in 2 places (fortunately not at the same time you're there, in pretty much every case).


"Quite simply, the continued human expertise and organization necessary to manage and sustain such a system is far, far beyond anything we have today."

You don't need "a system" for doing this. You only need one habitat. 9,999 of 10,000 habitats could be staring into their navels, but the one building more will still be building more. So, I think you ought to flip the intuition around; it's not a question of keeping everyone on board, it's a question of what it takes to extinguish everyone's ability to do this, which, once it gets going, would basically amount to "a local supernova".

"I have even the remotest faith we could manage space habitats politically"

Again, I think maybe you're envisioning a world in which all the space habitats are just peacefully co-existing... but this is not a necessary precondition for them to exist, or propagate. Yes, they'll war, and they'll split, and they'll join into groups, and there will be diversity that makes the current diversity on Earth look like boring homogeneity, because there won't be such a strong force pulling people back towards a single species mean. It doesn't have to be utopia to exist.


You can see my other comment for clarification: https://news.ycombinator.com/item?id=22480827

It's not about habitats coexisting with each other or not, it's about them imploding internally because people develop factions, compete for resources internally, threaten brinkmanship with critical systems, and eventually someone goes too far -- like people do all the time in countries all around the globe today.

Except a company can go bankrupt and a political regime can be replaced without all their employees or citizens dying in the process, because none of them are responsible for maintaining earth's life support.


Yes, some of them will die in civil wars.

But not all of them. And the ones that survive have the opportunity to learn from the ones that didn't.

That's less different from Earth than you may think. We've witnessed entire countries collapsing even in the last few years. If it wasn't for Earth providing free air, reasonably cheap water, and not-that-difficult food even when your local political system collapses, nobody on Earth would even consider switching their country to run on Communism; rather than merely killing tens of millions of people, it would be seen as bringing total extinction on anyone who tried it.

Basically, evolution doesn't stop.


I find this comment actually more insightful than the GP.

How often have I talked with people and mentioned that, in terms of social behavior, we make such slow progress, if any. If we could end human greed and other bad things, how easy it would be for us to go to space. If money did not count and everyone worked toward the great goal of advancing into space, we would be there already.

What many call "human nature" (which I do not believe) is what holds us back.


Slow and painful indeed, but considering we are animals, the progress is massive. Just take a look at how brutal nature is: social Darwinism is the standard, decisions are made on instinct, putting yourself first.


TIL it was Steve Bannon that killed off Biosphere 2.



You make the case that political issues haven't changed much in the past 2400 years, but then go on to say you'd like to see us manage things here at home before expanding into space?

Those seem to be in conflict to me. I do agree that human engineering has developed far faster than human politics, but I think we need to just accept that this trend will continue.


The conflict is exactly what I'm pointing out.

It's pretty easy to imagine a minority of administrators on a space habitat decide they'll take some critical system hostage unless they get a bigger share of whatever they want because they think (perhaps legitimately) they've been the victim of some injustice... overplay their hand, critical system fails despite nobody actually intending it, millions of people in the habitat die. Or a million other ways for disaster to happen because of human reasons, not engineering ones.

Here on earth, even when societies devolve into civil war and massacre (e.g. Syria the past few years), at least they don't destroy the earth along with it. But with a space habitat, even a partial breakdown of order easily means everybody can just die.

So let me be clearer: I wouldn't just "like" to see us manage Earth better first. My point is that failing to do so may very well prevent space habitats from being viable at all. "Accepting that this trend will continue" doesn't mean we'll make space habitats regardless -- it means accepting that space habitats might never be a thing (no matter how possible the engineering is), unless we first make massive enough progress in our forms of political organization and cooperation as well.


While I see your point, it should be noted that we are perfectly capable of killing everyone on Earth as well. The only reason this hasn't happened is because the whole world came together to be more sensible[1] about nuclear conflicts than we were about less destructive conflicts, and because of a good deal of dumb luck. And killing everyone on Earth will only ever get easier as technology advances. At least a distributed civilisation means one community self-destructing doesn't mean the end of humanity.

[1] Not a high bar, and not by much. Nevertheless, we are still (barely) here.


This is probably where GAI would be really useful.


Wow, I had never heard of Biosphere 2 (or 1, for that matter). That is a very interesting read, thank you! Did not expect to run into Steve Bannon in an article like this lol.


In the current political (capitalistic) climate, a project like this would only be pursued if there was a vested financial interest; they're pursuing visiting asteroids mainly to see if there's a lot of rare / valuable materials in them (e.g. platinum), and making getting into orbit cheaper for financial gains as well (decreasing costs, increasing volume; also, space tourism).


I don't think you even have to go that far.

Modern encrypted spread-spectrum digital radio would have looked like random noise until quite recently. I'm not sure that, if you'd pointed all the technology and all the brilliant minds that existed in 1930 at a modern cellular tower, they'd have recognized the signals as coming from a technological civilization.

In another century we'll no doubt be hugging the noise floor even more closely. It's not hard to imagine that even if alien civilizations are broadcasting omnidirectionally all around us, we're simply too primitive to bother targeting with a message. Why waste the effort when you'd be able to communicate so much more information, and more efficiently, by targeting just a slightly higher baseline of receiving technology?


> Modern encrypted spread-spectrum digital radio would have looked like random noise until quite recently

Topics like this are discussed in Stanislaw Lem's novel His Master's Voice. A really good read.


I always thought the more realistic answer was that the window for detecting an emerging civilization is on the order of a couple of decades. In that time they go from simple, easily detectable modulation schemes broadcast at very high power to modulation schemes that are nearly impossible to differentiate from noise, at much higher efficiency and lower power levels. Much like we have: in the 1950s it was all high-power TV/radio signals radiating into space on fairly narrow bands. Nowadays we have really wideband signals using really advanced modulation techniques, frequently sitting at or below the noise floor (GPS!) and frequently very short range (5G!).

Then, using your Drake equation variables, calculate how many are within that couple-of-decades window, at any given point, that you can see.


Radar. That's the RF that's easiest to detect from long distances, it's focused, and it's strong.

The hard part is deciding if it's natural or artificial.


Top of my head, so I may be off, but: the Arecibo message was by far the easiest to detect. Its signal will become indistinguishable from background radiation at a distance of slightly over 625 light years.

The converse: any life out there trying their best to communicate with us, beyond 600 ly... we wouldn't realize it even if we happened to catch (what's left of) the signal.


> Its signal will become indistinguishable from background radiation at a distance of slightly over 625 light years.

Doesn't that depend on how large the receiving antenna is? Because the larger the antenna, the smaller the source angle you can focus on, and the less background radiation will be in that source angle.
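
For what it's worth, yes: in the standard link-budget relation (my sketch, not the parent's numbers) the received power scales with the collecting area, so a detection-range figure like the 625 ly above has to assume a particular receiver. For a fixed transmitter,

$$P_{\mathrm{rx}} = \frac{P_{\mathrm{tx}}\,G_{\mathrm{tx}}}{4\pi d^2}\,A_{\mathrm{rx}} \quad\Rightarrow\quad d_{\max} \propto \sqrt{A_{\mathrm{rx}}},$$

and a larger dish also narrows the beam, reducing how much background lands in the same channel.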


That may not last much longer either. Transponders are more effective/useful for civilian purposes and the military needs to be stealthy. Big radar installations have a sign painted on them that says "take me out first".


What about radar for asteroid detection? https://en.wikipedia.org/wiki/Radar_astronomy


This is based on the current knowledge of physics that we have. We know that it is fairly incomplete.

~5% of the universe is made of the stuff that you, I, a cloud, and the sun are made of. We understand this stuff very well, in terms of cosmic scale stuff.

~20% of the universe is made of dark matter. We know that this stuff falls down and really doesn't do anything else. Does it interact with itself, with us, can we make sandwiches out of it? No clue.

~75% of the universe is made out of dark energy. We know that this stuff makes other things ... fall up (?). Well, kinda, maybe? We're totally at a loss as to what the super-majority of the universe is, let alone what it is doing, or what we can do with it.

Presumably, if there are other intelligences out there, they may know if they can do anything with the other ~95% of the universe and they may be making it into burritos or something. We're a long way away from concluding anything about anything.


It's not impossible that 95% of the universe is boring.

2% of the human body is the brain, and while poetry is written about other body parts, the brain is the one doing all the writing.

Atoms are something like 0.05% electrons, but the interactions and behavior of the electrons account for almost all of the interesting parts, and with notably rare exceptions, the nuclei might as well be point charge/masses.

Just because there is a lot of dark matter and dark energy doesn't mean it will be particularly interesting in the long run.


>Atoms are something like 0.05% electrons

Just in mass. In volume (orbitals), electric charge, and importance for chemistry, the proportions are way different.


I mean, that's my point. There's nothing to say it won't be very interesting. We just don't know yet.


I would say that while the above numbers are certainly the current academic consensus, it is also part of the consensus that this missing matter interacts weakly and at very high energies. So for all intents and purposes it is mostly irrelevant, barring some unforeseen breakthrough that lets us interact with it more strongly (for the same reason everything but the electromagnetic force is kind of irrelevant from an engineering perspective).


Only, dark energy and dark matter are really just placeholders. We aren't sure they exist; we are just seeing that our calculations are off without obvious reason.


>3. Space habitats are more convenient and cheaper to move between than leaving or even entering a gravity well like Earth's.

I have always thought that this would be a much more efficient way of living off Earth. Sci-fi wants us to believe that we can terraform planets. All they ever talk about are the atmosphere generators. However, they never address the reason the planet didn't have an atmosphere to begin with. To me, the research into generating artificial magnetic shields is the place to be. Generating those fields at the size of a planet will be orders of magnitude more difficult than for smaller-sized ships.


> Sci-fi wants us to believe that we can terraform planets.

Given our current sample size of 1, clearly we can.

It's the part where the terraforming produces a more habitable planet that we haven't gotten down yet.

But hey, this is only our first attempt!


I'm sorry, what? Are you claiming we have created a livable atmosphere on another planet? Where? Which one? Are you talking about Earth? Humans did not create the Earth's atmosphere. We can barely "fix" the damage we do to it. "Fixing" is not creating.


He is saying that we are currently transforming the earth's atmosphere into a less hospitable one through a process of pollution / emitting greenhouse gases.

In my view, given that we can do this, it is not unreasonable to assume that one day we will know enough to be able to tweak a planet's habitability in the opposite direction, or in other words, make an inhabitable planet habitable.


> "make an inhabitable planet habitable."

Aren't those the same thing? Shouldn't this be "make an uninhabitable planet habitable"?


Yeah, my mistake.


Why magnetic shields? If the planet is large enough, you get gravity for free. If not, a magnetic field wouldn't help with atmosphere. (Or am I missing something?)


The magnetic shield has nothing to do with trying to keep people on the surface.

The Earth's magnetic field, whose trapped particles form the Van Allen belts, deflects most of the sun's nasty radiation streaming at the planet. That field is generated by the dynamo of the planet's molten iron core. Mars no longer has this magnetic shield, and the sun's radiation burned off its atmosphere. If we generate a new atmosphere, it will just get burned off again unless we can find some way of deflecting that radiation.


Don't most terraforming estimates put the time frame of atmosphere erosion at a few hundred million years?

If you can create an atmosphere, it should last for a long enough period to be worth it; heck, it might outlast humanity.


That still doesn't solve the radiation problem (at least on Mars). What's the point of colonizing Mars if we'd all have to live underground anyway? Might as well try doing that on Earth (or the Moon) first.


I think it's for solar and cosmic radiation.


Or, you know, just use walls and glass.


Building habitable structures on a planet's surface is not the same as making a planet's entire environment capable of supporting humans. That's the point of terraforming, which was the point of the comment.


The question is, why do you assume that an advanced civilization would maintain 10^18 animal bodies to carry brains around? That's ridiculously inefficient. It's also unclear why you would expect an advanced civilization to be built on a large number of individuals either. That, too, is derived from an animal impulse to procreate and maybe a reaction to mortality. Say that in the next thousand years we manage to move consciousness off-body, and the experiences this enables make the greatest drug trips obsolete by comparison. Why would we choose to build space habitats for bodies when we could build spaceships that use a fraction of the energy but let us explore the galaxy by slowing down the rate of experience?


I don’t assume that. If however your goal is a digital existence then you still have energy as the limiting factor and the best way to maximize this is likely a Matrioshka Brain, which from far away is largely indistinguishable from a Dyson Swarm.


And why would energy be the limiting factor? Maybe the limiting factor will be accidental entanglement with the environment of your quantum computer, and you want to cool your system waaay down to avoid thermals interfering with your finely tuned computation engine. Having a hot sphere around a star seems like a terrible setup for that. Maybe sitting out in a hydrogen cloud and running well-controlled fusion would be much better.

I'm not saying one or the other is more or less likely, but the range of possibilities is vast and unimaginable. The idea of energy (in the sense you and I use it now) is not even 400 years old. Harnessing ever larger amounts of stored energy was the hallmark of the industrial revolution. Declaring it to be the fundamental law underlying civilization is like concluding that the earth is flat: You mistake the shape of our immediate environment (the recent past and short term future) for the structure of the overall thing.

We are quite definitely headed in the next couple of hundred years for an end to exponential growth in energy consumption. Exponential growth near where we've been would lead us to cooking the earth in 300-400 years. By the time we're capable of doing solar system scale engineering we will have had millennia of 'culture' that exists within a constant energy envelope. What these entities will value, or want to do is beyond anyone's guess.


Interesting. If I understand you correctly, you are saying that it makes more sense to look for large IR sources (without accompanying visible light), instead of looking for radio signals. Correct?


Correct. The signature of a large Dyson Swarm would be hard to mistake, particularly for anything natural, at least anything we've seen thus far.


I'm having trouble seeing what it is about this scenario that would make it look so different from a star with planets (which are still very hard for us to detect) or one surrounded by clouds of smaller debris.

Is the idea that the habitats would be much larger than planets, or that there would be far more of them for a given total volume of matter?


I think this kind of thinking is roughly analogous to "why buy a phone now, when the latest model will be released next year"? Or, perhaps more on-topic, "why do any astronomical observation with this telescope when we will have a better one in a few years"?

I think the viability of that kind of efficiency is a big "if", and it will depend on a lot of factors (not just engineering ones, but also social, environmental, and economic ones).


Haven't seen the channel, but it's likely that there are other filters we don't fully understand yet, facing us in the future, that prevent aliens from creating Dyson swarms.

Because I'm an eternal optimist I tend to think of these filters as very positive things, like finding energy sources other than stars, or reclusing themselves in some VR utopia.


> or reclusing themselves in some VR utopia.

Interesting that you might consider that a positive outcome. My immediate reaction is that it would be a very negative one. It feels like giving up - "it's not great out here, we can't make it better, might as well plug in and pretend."

Maybe I'm making the same mistake as people trying to apply home ec budgeting to national economies; if I just dropped everything and started playing video games it would be bad, but the same might not be true for entire civilizations who are further up the Kardashev scale than humans.


> Even a partial Dyson Swarm should be detectable

How big would the signal be compared to normal variance between stars?


> enough to create enough living area for something like 10^16 to 10^18 people.

Enough for them to do... what? Trapped in a closed box, with no unexplored 'natural' non-engineered habitat, nothing to dig for, no unknown/unowned resources, no resources to use to escape.

Enough to let the cylinder owners extract money from their non-optional labours? Then it will certainly happen, but that doesn't make it a happier or more desirable future.


> Even if it was, no known or currently theorized material could support this.

Which physical limits are most relevant to precluding the solid sphere? Tidal forces?


So let's step back a second and look at a Ringworld. I'm not sure if Larry Niven proposed this idea (in the book by the same name) or just popularized it, but the idea is basically this: take the mass of the Earth, turn it into a ring that encircles a star, spin it to create artificial gravity, and put walls on the sides (on the inside) to keep an atmosphere in.

What do you have? The same mass as Earth, with a million times the living area.

What's the problem? At 1 AU distance, this thing would have to spin so fast (on the order of 1,000 km/s) that the centrifugal forces would tear it apart. Not even graphene could withstand the forces. So the idea just isn't practical.
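
For the curious, the arithmetic is easy to check (a hedged sketch of the idealized free-spinning hoop, with no safety factors or payload):

    import math

    g = 9.81        # target spin "gravity", m/s^2
    r = 1.496e11    # ring radius ~ 1 AU, m

    v = math.sqrt(g * r)   # rim speed needed for 1g
    print(f"rim speed ~ {v/1e3:.0f} km/s (~{v*3.6/1e6:.1f} million km/h)")

    # A free-spinning hoop supplies centripetal force from its own tension;
    # it needs a specific strength (tensile strength / density) of at least v^2:
    needed = v**2              # ~1.5e12 J/kg
    graphene = 130e9 / 2267    # ~5.7e7 J/kg, roughly graphene's ideal limit
    print(f"needed {needed:.1e} J/kg vs graphene ~{graphene:.1e} J/kg")

Even ideal graphene falls short by a factor of tens of thousands, which is why Niven had to invent a fictional material ("scrith") for the book.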

So if you create a hard shell, you have to ask: what is the intended goal?

If it is to create living area, where is gravity coming from? Do you spin the sphere? If so, it has the same centrifugal force problem.

If you don't spin it (and technically even if you do), you have the problem that the shell will collapse under the gravity of the star and/or its own mass.


Wait, isn't the force "that would tear it apart" the same as the gravity you wanted to generate?


An O’Neill Cylinder is a few miles wide. Stainless steel can handle the rotation required to create Earth-like gravity. If we can practically produce graphemes instead, you can go up to several thousand miles in diameter (McKendree Cylinders). A ring with a radius of 1 AU is something else entirely.
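
The scaling follows from the hoop-stress limit sketched above: spin gravity needs rim speed v^2 = g*r, and a free-spinning hull holds together only while its strength-to-density ratio exceeds v^2, so each material caps the radius. A rough sketch (my numbers, idealized, no payload or engineering margins):

    g = 9.81

    def max_radius_m(tensile_pa: float, density: float) -> float:
        """Largest radius a material supports at 1g: r = (sigma/rho)/g."""
        return (tensile_pa / density) / g

    steel = max_radius_m(500e6, 7850)     # stainless-ish steel
    graphene = max_radius_m(130e9, 2267)  # ideal graphene

    print(f"steel:    r_max ~ {steel/1e3:.0f} km")     # ~6.5 km: O'Neill scale
    print(f"graphene: r_max ~ {graphene/1e3:.0f} km")  # ~5800 km: McKendree scale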


We can already practically produce graphemes -- just not graphenes. :-)


I think you are overestimating the ease of building and living in space habitats as opposed to living on planet surfaces. If I were to guess, I'd say that any significant and detectable space habitats will be preceded by extensive colonization of nearby star systems. Though now I'm oversimplifying space travel instead. I guess this problem is so complex that we can't even estimate orders of magnitude correctly.


There are several gaps in logic here but in my opinion the most remarkable assumption here is:

- the only way that alien life could be intelligent is if they exploit the resources of their solar system maximally.

Don't we have enough evidence yet that maximal exploitation of resources is a Very Bad Idea?


I wouldn't call using 1% of the mass of Mercury to create enough living area for a quintillion people and using most of the energy our Sun produces anyway to be analogous to anything you're alluding to on Earth thus far.

But the point here isn't to dive into the moral issues. It's not even to ask what MOST civilizations would do. It's simply to ask would ALL civilizations restrain themselves this way? If the answer is "no" then we'll find at least one. Saying no civilization would engage in this kind of growth when it looks entirely possible seems like far more of a stretch than the scenario I outlined.


My claim is that no civilization would engage in this kind of growth, yes.

Maximally exploiting your resources is not civilized.


We may have the technology to create massive habitats in the near future, but what if we do not have the need?

Population is projected to plateau in the near future. What if those habitats exist but just are not observable since they are quite small compared to their stars / planets?


Isaac Arthur deals with this at length in his videos. Those projections are unlikely to hold for long. Cultures that don't expand will quickly be replaced by those that do, and the process continues. Population expansion is really limited only by resources.

You see this in all other species, but we somehow think it's different for us. That remains to be seen.


We think it's different for us because the reversal in population growth is correlated with education and development levels. These are factors which aren't present in animal populations.

If we lived in a world where every culture had achieved a certain level of education and development (and most all of them do want to get there), it's likely they would all have flat or declining populations. They might have to heavily subsidize reproduction just to sustain themselves and all this talk about building space habitats would be even more abstract than it is today.


This seems to currently be a thing. It's a little premature to say it's going to apply to all cultures across all time. In fact I'd take the opposite side of that bet.

The thing is, if there's any culture for which that's not true (Mormons, Orthodox Jews, some groups of Muslims, whatever), then in a few generations they replace the cultures, like our own, that don't have a lot of children. Then expansion continues.

I could see expansion slowing, even reversing temporarily, but it seems hard to imagine it permanently ending.


Well, while they're still dominant, all the other cultures might decide to band together and discourage, eradicate or dilute the one that keeps growing.

Anyway I agree that we can't say for certain whether population will grow or shrink. In fact in the context of this discussion I think that's the most important point. Many discussions about alien civilizations tend to assume that they'll evolve into huge multiplanetary populations with large energy/EM/whatever footprints. When in reality our most developed cultures are slowly declining in population. If an underground cache of brains in pleasure boxes is the end stage for most civs we will probably never know.


That's a good point, and one I often wonder about too. Maybe advanced civilizations are not expansionist. However, it doesn't make a good solution to the Fermi Paradox, because all civilizations would have to not be expansionist; if there were just a few exceptions, they'd expand all over and we'd likely spot them.


This is simple linear thinking and a lack of imagination, as well as empirically wrong. Population expansion in humans ended because our culture adapted to the end of child mortality; actually, the massive expansion we saw and still see happened because culture took a few generations to do so. As soon as we conquer mortality as such, our culture will shift again.

And that is only thinking about the impact of tech we might see this century.

Exponential growth is really a phenomenon of transitions. We are in a very rapid 10,000-year transition from biological system to cultural system. This transition accelerated further in the last two centuries. If we think about the end point of this transition, it should be a system that is entirely defined by its culture rather than by its biology. Maybe even completely away from its biology. It's fun but idle to speculate what that culture will be like, but if experiences can be shared as easily as text today, what is the value of having billions of copies of similar experiences? What _is_ an individual in that situation?

Even if we accept your simple expansion/competition/resources-based model of reality (and I see no reason we should), maybe combining consciousnesses leads to much better results. Maybe expansion will look like ever-increasing use of energy resources to enable larger consciousness rather than maintaining more animal bodies.


Lots of wild assumptions you have there...


I think my comment contains speculation that is labelled as such, and historical observations. What assumptions do you mean?


Yes, but not for the reasons you outlined.

Use Occam's Razor: there are no aliens.

(Yes, I know I'm treading on some dearly-held religious beliefs now and will get lynched. Still, somebody had to say it.)


That's a bit too broad of a statement to be plausible. It's likely there are no aliens near enough to us in time, space, technology, and desire to communicate to make such a search fruitful right now. There could very well be aliens on the other side of the galaxy that we can't see. There could have been aliens nearby a million years ago that are now extinct or elsewhere. There could be aliens right now that are pre-industrial, or advanced enough not to communicate with methods that resolve on the bands SETI looks at. There could be aliens that follow a game-theory mindset of non-communication, a la The Dark Forest. Etc.

Space is ridiculously big. The universe is ridiculously old. Talking in absolutes about such things is a poor strategy.


The universe actually seems surprisingly young to me. If the universe is a mere 14 billion years old, then dinosaurs ruled the Earth for roughly 1/80th of all time! And life has existed on Earth for about a quarter of all time!

When I was a kid, I figured the Earth itself would be a blip on a universal timescale. It's kind of disappointing that geological and astronomical timescales are so similar.

Of course, it doesn't really change your point. If a billion years is a surprisingly short period of time, then 21 years is an instant.


> desire to communicate

This was my original point: you can’t hide a Dyson Swarm. It goes the other way too. You can’t really hide from a K2 civilization either.

We’re getting into Fermi Paradox territory here. The beauty of a lot of the arguments here is that you don’t need to prove or even assume that civilizations will, on average, “hide”, because it just takes one not hiding to be detected. So the argument is actually “do ALL civilizations hide?”

This is a much harder argument to make.

> There could have been aliens nearby a million years ago that are now extinct or elsewhere

In Fermi Paradox terms, this question is framed as “is there a Great Filter ahead of us?” It predictably generates a lot of discussion. It’s hard to predict what that filter might be, because even apocalyptic scenarios other than extreme bad luck (e.g. a nearby star going supernova) are unlikely to completely wipe us out at this point, even nuclear Armageddon.


Okay, let me qualify: there are no aliens in the visible universe.

Of course what exists or doesn't exist in the universe that's not visible is pure unscientific speculation; might as well be made of blue cheese as far as science is concerned.

> Space is ridiculously big. The universe is ridiculously old.

This isn't true. Mathematical concepts of infinities and limits are "ridiculously big". Space is finite and thus ridiculously small.


> Of course what exists or doesn't exist in the universe that's not visible is pure unscientific speculation

If something can only be proven to 99.999999% certainty, is that still "unscientific"? What if scientists predict something happening with only a 0.1% chance, for example the likelihood of a particular asteroid hitting earth? Or what about 60%, like a typical weather report predicting rain?

Generally, the high, low, or intermediate likelihood of some event happening does not make it unscientific.

While it would be unscientific to simply say that aliens exist, it's plenty scientific to consider the rate of star formation and age, conditions for life around stars, and time it takes to develop intelligent life to say that there is a particular likelihood of there being alien life.
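
That's essentially the Drake equation. A sketch with deliberately invented inputs (every value below is an illustrative guess on my part, not data; the structure of the reasoning is the point):

    R_star = 1.5   # star formation rate, stars/year (guess)
    f_p    = 1.0   # fraction of stars with planets (guess)
    n_e    = 0.4   # habitable planets per such system (guess)
    f_l    = 0.5   # fraction where life appears (guess)
    f_i    = 0.01  # fraction that develop intelligence (guess)
    f_c    = 0.1   # fraction emitting detectable signals (guess)
    L      = 1e4   # years a civilization stays detectable (guess)

    N = R_star * f_p * n_e * f_l * f_i * f_c * L
    print(f"N ~ {N:.0f} detectable civilizations, given these guesses")

Change any input by an order of magnitude and N swings just as far, which is why the equation quantifies the question rather than answering it.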


> While it would be unscientific to simply say that aliens exist, it's plenty scientific to consider the rate of star formation and age, conditions for life around stars, and time it takes to develop intelligent life to say that there is a particular likelihood of there being alien life.

Like I said in another comment below - you're assuming information is distributed uniformly across the universe, which we don't really know and can't assume because it certainly isn't in the solar system.


> Yes, I know I'm treading on some dearly-held religious beliefs ....

> what exists or doesn't exist in the universe that's not visible is pure unscientific speculation

I don't see anyone saying that the existence of aliens isn't speculation, or that "aliens exist" (or your "aliens don't exist") isn't just a personal belief, as stupid to argue about as religion. But as countless wars have shown, people (like you) will try to create conflict from differing personal beliefs. Just because you drop "Occam's razor" doesn't make your belief the correct one.


OK, you know that whole big bang thing? It didn't start with a singularity the size of a golf ball; it was an infinitely large singularity that "exploded" and got bigger.

The singularity was infinite. Space is infinity, but bigger.

Everything you said is unsupportable.


Like I said, I'm talking about observable space. Science only talks about things that can be observed (by definition), so as far as science is concerned, we must assume that we are alone in the (observable) universe until further evidence.

What's out there outside the observable universe is fun to speculate about, but it's armchair philosophy, not science. Maybe there's an infinite variety of life and somewhere out there right now Gandalf is wielding a light saber while riding a dinosaur. Maybe we're alone in the universe because the Earth so happens to sit right on the only information complexity singularity in the universe. (We really don't know anything about the information structure of the universe yet; all the "dude, the universe is really big" arguments assume that information is distributed uniformly across the universe, but this obviously doesn't match observed reality.)


I think you're using Occam's Razor incorrectly here.

Here are all the facts:

1) There's an incomprehensibly large number of stars and planets in the universe

2) On one of these planets (Earth), there are conditions which have led to the appearance of life

3) These conditions are not unique to this planet

We don't know of any fact that would exclude the possibility of life coming about on other planets with similar conditions, so using Occam's Razor actually tells us that there's probably lots of other life out there.


Point 1 definitely isn't true. (Observable) space is actually tiny and young.


> space habitats

What about gravity? Is it possible/feasible to have long-term habitats without any meaningful gravity?


You spin them to create “gravity”, which is really just centrifugal force.
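
A rough sketch of the numbers involved (illustrative; the oft-cited guideline of keeping spin under ~2 rpm for comfort is an assumption, not settled fact):

    import math

    g = 9.81
    max_rpm = 2.0                          # commonly cited comfort limit
    omega = max_rpm * 2 * math.pi / 60     # angular speed, rad/s
    r = g / omega**2                       # from a = omega^2 * r
    print(f"minimum radius ~ {r:.0f} m")   # ~224 m

So anything much smaller than a few hundred meters in radius has to spin uncomfortably fast to fake a full 1g.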


So what sort of geometry would they have: toroidal, cylindrical? I was initially thinking you meant they would be spheres, but I'm not sure how you would generate gravity on a sphere, unless only the equatorial region was inhabited.


The type of habitat he is describing is known as an O'Neill cylinder. The Wikipedia page for the topic probably answers most questions you have about it.

https://en.wikipedia.org/wiki/O'Neill_cylinder


Fascinating. Thank you for writing this comment.


Seems a bit misleading; they're just no longer distributing work units, not closing the research center. Nonetheless, this is sad news to me. My entire career path in the tech world exists because I got a campus job working for SETI@home.


Agreed - I think the title should be updated to call out that it's the "app" that is shutting down.


Anecdata: I set up an old computer with this in 2009, but a few months later I heard about this thing called cryptocurrency mining...

EDIT: The article notes that they are reaching a point of diminishing returns with the distributed nature of the SETI@HOME tasks, but I can't help wondering if things like crypto had an impact on participation in this and other @HOME-style academic projects[0].

[0]: https://boinc.berkeley.edu/projects.php


I'm not an expert in either this or cryptocurrency, but it always seemed like these two would be a good marriage of technologies rather than having crypto mining rigs waste resources on unproductive proof of work algorithms. Does anyone know why that wasn't feasible or didn't catch on?


Proof of work relies on computations that are hard to do but easy to check that someone has done. The entire network has to check someone's proposed result. There are very few computations with the degree of asymmetry needed to make bitcoin work, and the types of things that SETI@home needs don't fit that bill.
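
A toy version of that asymmetry (illustrative only; real Bitcoin compares the hash against a numeric target rather than counting zero digits):

    import hashlib

    def mine(data: str, difficulty: int) -> int:
        """Find a nonce whose hash has `difficulty` leading zero hex digits.
        Expected cost: ~16^difficulty hashes."""
        nonce = 0
        while True:
            h = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
            if h.startswith("0" * difficulty):
                return nonce
            nonce += 1

    def verify(data: str, nonce: int, difficulty: int) -> bool:
        """Checking a claimed solution costs exactly one hash."""
        h = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
        return h.startswith("0" * difficulty)

    nonce = mine("block payload", 4)          # tens of thousands of hashes
    print(verify("block payload", nonce, 4))  # one hash -> True

SETI work units lack that property: the cheapest way to check a result is essentially to redo it, which is why BOINC projects validate by sending the same unit to multiple hosts.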


Isn't Folding@Home trying to run an asymmetrical computation, though? The broad strokes as I understand them are that you have a sequence of amino acids, you arrange them into a three-dimensional protein (the hard part) by whatever method, and then you evaluate the energy bound within your structure (the easier part?). Lower energy is better.

The obvious problem is that there's no judgment of "correct" or "incorrect". Analogizing to a bitcoin proof-of-work, it would be as if a hash beginning with more zeroes always outranked a hash beginning with fewer zeroes, but there was no way to know whether your hash started with "enough" zeroes or not.


Protein folding might work as a proof-of-work algorithm. One big problem is of course that it's useful work that could conceivably be monetized.

Proof-of-work works because a theoretical attacker would have to invest huge resources into an attack. For bitcoin it's trivial to say "for attack scenario X an attacker has to spend at least Y", and as long as the expected reward is lower than Y the network is safe (that's, for example, one factor you use to determine how many confirmations to wait for before accepting a payment). If the attacker could conceivably gain money outside the cryptocurrency for performing the proof-of-work, that makes the whole calculation less predictable, and likely exposes you to more situations where an attack is actually profitable.
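
A deliberately naive sketch of that calculation (all numbers invented; it ignores the attacker's recovered block rewards, hash-rate share, and so on):

    block_cost    = 100_000  # attacker's cost to re-mine one block, $ (made up)
    confirmations = 6        # blocks the merchant waits for
    payment       = 400_000  # value of the payment at risk, $ (made up)

    # Rewriting history past n confirmations means re-mining at least
    # n+1 blocks, so this gives a crude lower bound on the attack cost:
    attack_cost = (confirmations + 1) * block_cost
    print("accept" if payment < attack_cost else "wait for more confirmations")

If the same hashing also earned the attacker money elsewhere (say, a folding bounty), block_cost effectively drops and the bound above quietly weakens. That's the unpredictability being described.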


What's needed is a large delta between the value of the work and the value of the currency. It's not a requirement that it be completely wasted computation, that's just how it happens to work right now.

Protein folding, where the value is speculative, cumulative, and difficult to cash out, is an excellent candidate. Especially since the ability to realize the value of a given fold is diminished by openly publishing it, which is a requirement of proof of work; the folding computation thus becomes a positive externality of the cryptocurrency, since everyone can use that knowledge as an economic stimulus.

I'd like to see someone try it.


Isn't "maybe I can monetize this protein" less of a sure bet than "I can rewrite these transactions" ?


An attack on proof-of-work wouldn't let you rewrite arbitrary transactions. At most an attacker could exclude certain recent transactions—most likely their own—reverting payments which were previously deemed "complete" and replacing them with conflicting transactions. All transactions still need to be signed with the proper keys, however, so the attacker can't just lay claim to others' coins.


You could incorporate that in a quorum blockchain like Stellar (or Ripple, I guess), but the whole point of those is that they remove the need for PoW, so that's a bit useless. Maybe it's a cool way to build trust in such a network, though...


RE: Folding@home: how much money will the medical cartel make off of the fruits of all that free labor? Do users even know what they're folding? How do users know that they're not building bioweapons?


Protein folding simulations don't produce a lot of money for pharma (I say that as somebody with an F@H publication to my name, specifically doing research on GPCRs, which are a huge pharma target). Users can see what the work units are doing, and choose subprojects, presumably avoiding bioweapons if that's important to them.


How could the average technically savvy user, let's say willing to use Google a bit but not to obtain a biochemistry degree, tell which subprojects are possible bioweapons? Not a rhetorical question.


Typically, there is a stated/intended plan for each project. This page lists active projects: https://apps.foldingathome.org/psummary Unfortunately, the project pages return a 502 but Google cache helps: https://webcache.googleusercontent.com/search?q=cache:q6DvOt...

I'm going to guess that if you inspect each project it has a purported (positive) medical use and you'd have to make a judgement call on each project based on good knowledge of biology as to whether it had a bioweapons application.

I doubt, but cannot say for certain, that F@H would host any projects that are stated as bioweapons, and I'm skeptical that somebody working on a bioweapon would try to cloak its intent and come up with a plausible medical application just so they could get free F@H cycles.

I remember the days of logging into national labs supercomputers and seeing the other jobs on the system, such as job_name "submarine_geometry_optimizer" and the user_name was "electric_boat" or something similar.


Thanks for answering my questions which were also not rhetorical.


For the most part this is fundamental research of the type that is typically funded by public grants; these are not pharma companies... As for what they are contributing to, it's all pretty well documented.


The projects Primecoin and Riecoin are digital currencies that try to provide proof of work algorithms with (somewhat) scientific value.


There's GridCoin, as well, which utilizes BOINC for work units.


Golem (www.golem.network) is building a general-purpose distributed computation platform where people who need compute power pay using cryptocurrency. And developers can build software to run on it (and get paid when it's used).


I vouched for your comment, as it's factually correct near as I can tell. I've been interested in the project, especially since Joanna Rutkowska joined them.


It's already been done in 2 projects: one called Curecoin and another called FoldingCoin. Kinda niche I guess; they didn't really catch on.




There were plenty, but basically none of these savants understood economics, so they didn't build a demand model, only a supply model. So people got paid in coin and just flooded the market until it was worth nothing.

Crypto mining rigs are proving far more productive than SETI or Folding@Home ever did. Economic pressures push the really energy-heavy miners into places with renewable energy and into repurposing waste byproducts into electricity, so this is actually one of the most useful and cleanest sectors on the planet, ironically the opposite of what people are saying. Protip: headlines only tell you "how much" energy is being used and not the source, and this is very intentional.


I was in a similar boat to you, except I set up my (then-new) i7 rig for SETI@HOME because I couldn't get mining to work properly, or it kept crashing!

Remnants of my attempts are on my PC at home somewhere, but my searches of my drives have turned up no wallets :-(.


Were you able to keep the wallet files well into the next decade? It was fraught with disaster back then; I read about a lot of regret.


I had the opposite problem. Knowing the importance of offsite backups I had the brilliant idea of backing up my wallet.dat on Dropbox without any additional encryption. My bitcoin were only worth a few bucks, who'd bother stealing them, right? Years later, I saw that BTC had gone up in price a few orders of magnitude so I tried to load my wallet.dat and cash out. Except when I opened my wallet file, the balance was zeroed out. Oops: https://www.troyhunt.com/the-dropbox-hack-is-real/


So someone internally stole them at Dropbox? Did you have it shared with the public?


Nah, my guess is that whoever hacked Dropbox (or someone who bought the info from the person who hacked it) had a script to brute force passwords, then log into accounts and scan for wallet.dat files.


Did you ever end up tracking where and when the coins went?


I looked at the wallet address, which looked like it was used to slurp up BTC from a number of different sources. Didn't bother tracing it any further than that because I was so disgusted with myself for losing (potentially) thousands of dollars. Good news is that I signed up for Coinbase when they were giving out 0.1 BTC signup bonuses, so I still got to ride the wave with that. Made me feel better about writing off the BTC I mined the hard way on my MacBook Pro - better to think of it as an expensive way to learn about the importance of good passwords and 2-factor authentication :)


How did that do for you?


Just the other day I saw a post about the current status [0] of the Voyager probes, the farthest man-made objects that we have sent out so far. [1]

It made me wonder about the possibility of an alien species discovering them. Probably infinitesimal, barring divine intervention. I mean, what are the chances of us stumbling upon similar probes sent out by another species who had no idea we were here? Do we even have the tech to detect something so small entering our solar system, even if it were to pass close to Earth?

[0] https://voyager.jpl.nasa.gov/mission/status/

[1] https://en.wikipedia.org/wiki/List_of_artificial_objects_lea...


I have wondered about folding@home as well, another distributed project begun in another age that seems not, from the outside, to have lived up to its early hype. In the meantime, it seems that Alphafold has surpassed conventional efforts at understanding protein folding, including folding@home's. https://moalquraishi.wordpress.com/2018/12/09/alphafold-casp...


The two approaches aren't directly comparable. Alphafold predicts the optimum structure for a sequence, while folding@home explores protein dynamics (how a protein's structure changes over time).


I can remember my high school science teacher, sometime between 2001 and 2002, telling us about this project; you'd see it running on his computer any time he wasn't using it.

I always wondered what came out of it and how cool it felt to feel like you could, as an individual, help support something massive by doing something so small.


>>I always wondered what came out of it and how cool it felt to feel like you could, as an individual, help support something massive by doing something so small.

You can still participate in Folding@Home, which in my opinion is a far more worthy pursuit - actual medical science came out of it and more is still being produced.


I had the brilliant experience of visiting the Allen Telescope Array and meeting Jill and her husband before the announcement of their $100MM donation a few years ago. I have to say that it is an incredibly lean operation run by the smartest people I've ever met. The @HOME project was more about being able to help handle throughput they would've otherwise just had to dump. I think it makes sense to sit back and spend time reevaluating what's been collected so that the filters can be evolved into something more efficient. I don't think they've seriously tried to apply AI yet. I'd be eager to hear their opinions on the subject.


There was a promise at the time (still active, I'm sure) that if your computer discovered a signal from ET, you would be listed as co-discoverer of alien life. I still remember watching the screensaver crunch through work units with the tiny but real hope that the data on my computer had that signal.

Even at the time, the prize seemed a little unearned, since all you had to do was click through an installer. But it was still an incentive. Probably the greatest lottery prize that's ever been offered.


I don't think the discovery was required to be ET. I think that if the blip your system discovered was anything significant (a pulsar or something), you'd get the credit. It was, after all, your machine, and electricity you paid for, that did the discovery. It was a precursor to crypto proof of work.


I worked in a debt collection center where all the call center people used Linux workstations running wy60 terminals and a custom check viewer I wrote in Java (I had to reverse engineer our vendor's viewer; I did get it working in Wine, but it crashed a lot).

Installations and updates were automated. After getting the company logo and xplanet in the background, I got permission to put SETI@Home and Folding@Home on the machines (I think I did half and half). After all, the machines were barely doing anything other than running terminals and rendering TIFFs. Our company name was listed as the team.

Looks like the stats are offline for SETI@home teams... wonder how much that cluster kept contributing after I left.


Ah, good memories of being temporarily banned from the EE labs at UT Austin for running seti@home on a bunch of the Unix boxes. Must have been around 2000.


I got chewed out at my workplace around 1999 or so. We had a couple hundred Sun workstations on the network and anyone could run commands remotely on any workstation. So I set up a cron job to spawn SETI@home jobs on every workstation every night at 9pm, shutting them down at 6am the next morning.

I was way up there in the rankings for a while, until the sysadmins started investigating the strange traffic patterns...


sysadmins hate you.

Surprised you didn’t just use TACC credits for UT students.


I had just left Berkeley when this was released. I remember installing it on every computer I had access to, which was a lot, because I was in IT at a startup so I could put it on every desktop we had. And every server we had (but of course at nice 19, until management found out).

For a while we were in the top 10, until much bigger companies got in on it.

Kinda sad to see it shut down, but to be fair, I haven't run it in years.


End of an era for sure. But then there is this I guess: Something in Deep Space Is Sending Signals to Earth in Steady 16-Day Cycles: https://www.vice.com/en_us/article/wxexwz/something-in-deep-...


I'm a little sad about this. My dad works in computer storage and had access to lots of computers doing nothing. He's now at "just about 11 quintillion flops, 99.12% ranking vs all other users"


"For those who wish to donate their CPU resources, SETI@home suggests users select another BOINC project that also supports distributed computing."

https://boinc.berkeley.edu/projects.php

I loved the concept of SETI@home and ran it on a bunch of computers at my university when I was a student.


I just bought a powerful mac mini, but due to other projects arising, it sits idle. I am curious if anyone has suggestions.

I looked through a dozen or so BOINC projects, but they all seem dead. The amicable pairs project had some 2020 results and activity in its forum, but I couldn’t draw a line to any practical goal.

That said, maybe the sorts of projects that have practical outcomes don’t need to rely on the computing power of strangers.



"Distributed computing project, Rosetta@Home, is using the BOINC infrastructure to model covid-19 proteins that may be drug targets. " - https://www.reddit.com/r/COVID19/comments/f5as77/distributed...

I have run BOINC on my own computers for about 20 years. I really like it because it always properly steps out of the way whenever the computer is being used for something else (it can even be configured to yield when other processes simply need more resources), and because it's cross-platform.


"The owner of this website (www.bleepingcomputer.com) has banned the autonomous system number (ASN) your IP address is in (20473) from accessing this website." - thanks, owner...


Ah the excitement of seeing those lines trace across my old Mac’s screen in the 1990s. Would I be the one to find the pulse?

... no


Did SETI@home accomplish anything useful with its 21 years of computing? Outside of the general "it's always good to do science, even with negative results" which I agree with.

I ran a large cluster of university machines surreptitiously for this project many years ago.



So, is this all they did in 21 years? Funny that people don't criticize it for wasting electricity, yet complain about how much Bitcoin wastes, when Bitcoin seems like a far more practical utility than SETI.


I figured it was basically killed by improved power management. When it started, a lot of computers didn’t sleep because they might not wake up.


Memories of SETI activity on my family's first desktop computer screensaver and my dad's work laptop.

I don't have much to add other than that this is one of my first memories of computers being useful for the world outside my own.


Oh, sad to hear. I remember having SETI@home running on a couple of systems when in idle. Heck, my PS3 had some SETI@home background-task running when not in active use. Good times.


Does anyone remember, about 18 years ago, when some well-meaning but ignorant people got charged with electricity theft for installing distributed.net on their work's PCs?

https://www.bloomberg.com/news/articles/2002-01-23/plea-agre...


We had a well-intentioned kid at a military facility in the early 2000s who loaded this on every computer he could log into. He genuinely thought he was doing a good thing for the world. Maybe he did. But the military network admins were none too pleased, and he got somewhat more than a stern talking-to. Felt genuinely bad for him.


Does that mean folding@home is next? I think the screensaver dying out eventually killed it. Also cloud computing.


Ah, I used to fold. Sometimes I regret that I'm too young to have enjoyed the early 'glory days' of personal computing, but thinking about it now, buying CustomPC magazine and finding inside it the team's folding@home league table will sound pretty weird one day, if it doesn't already.


Honestly, I never cared much about SETI@home; it always felt like an empty waste of resources. But folding@home, or even just community training of LeelaChess, really feels like a noble cause I am willing to spend some electricity on.


Well, SETI@home would have predated folding@home, so people stuck with what they knew.


I doubt it; they're actually recruiting people right now to help with work on COVID-19 proteins.


Also laptops.


The next would be work@home (aka "freelance")


This reminded me of Dreamlab, which is an interesting adaptation of the same idea to mobile phones. https://www.vodafone.com.au/foundation/dreamlab


A lot of nostalgia in this thread but don't be too sad to see it go. It was only ever a waste of energy for everyone involved. If you want to put your extra processor cycles to a good cause, there are much more worthy projects like Folding@home


I wonder how much money and CO2 worth of power was spent during the lifetime of the project.


This is why I never did it. I remember thinking it was really cool back when I was in high school, but could never justify the electricity use.


I ran BOINC in winter, since it's not wasteful if you're already running heaters.


Man alive. I first ran some SETI@home stuff on my gaming rig in 2001. After a year or two I figured it was wasted energy. Surely they had more efficient rigs helping out than my lowly Unreal 2004 machine, right?


If they were smart, they'd just quietly switch to bitcoin mining units.


Ahh but we did switch to mining bitcoin. We're just not allowed to tell you about it.


Or mining BLE beacons with Nodle.io


Back in 2000, I was working for a small dotcom startup and I remember we came into work one day to discover that our production servers were all running at a crawl & unable to serve traffic. Turned out that one of the devs had installed seti@home on all our servers (this was before devops was a thing) and it was hogging 90%+ of the cpu. I think he'd installed seti on every machine in the office that he could get access to, even other people's workstations. He was utterly obsessed with the thing.


Back when HTC still made cool things, they had an Android app that would let you contribute your phone's idle cycles towards various projects, of which I think SETI was one.[0] I remember running it on my HTC One M7 for a while.

[0]: https://play.google.com/store/apps/details?id=com.htc.ptg&hl...


So, what about the hypothesis that aliens 'released' the concept of cryptocurrencies on Earth, so that idling computers and whole data centers would start mining bitcoins instead of investigating signals from extraterrestrial life forms? It'd be a nice plot: trying to hide themselves a bit longer from our search...


I wonder what fraction of the work was done in the final year. BOINC capacity has expanded 600% in the last 5 years, and it's still not even all that large (fewer than a million hosts, average resources per host similar to a desktop computer). The raw idle capacity of the world's computers is far larger.


My high school physics teacher offered one point of extra credit per n work units. I had a fairly beefy computer for a high school kid, and since he never put a ceiling on the number of points offered, I ended up accumulating enough extra credit to bump my grade up a whole letter.


I remember everybody at my work (Software Engineering) setting this up in 2002. The idea was kind of obvious, but at the same time so far ahead of its time.

It would be great if a similar project was set up to provide everyday, real-world services like, say, internet search, or a distributed S3 clone.


Considering how much of a carbon footprint it created, I'm not sure the trade-off was worth it. I'm happy they're shutting it down for now. Maybe once everyone is on renewables/cleaner energy sources it'd be a project worth restarting to make use of idle capacity.


No Aliens found unfortunately :(


None of the type that they're allowed to tell you about :)


The Capitalized type are not the good type...


I suspect a conspiracy. Involving bees and secret Arctic greenhouses...


ET must use encryption that looks like natural signals.


I take a lot of screenshots for work requests, and I tend to save them... hoping that 20 years on, they are as interesting as these screenshots.


Isn't it a bit early given that we've just entered the era of machine learning which could be a big help for the project?


I wonder what the aggregate carbon footprint and electricity bill was over the project's life. It has to be a lot, right?


Got to see the Seti Burst computer rack at Arecibo in December along with a lot of other stuff. Glad to have that memory.


If you're looking for alternatives besides the BOINC stuff, there is distributed.net.


Loved seti as a teenager and in my college years, always felt cool when it was running.


Wow, end of an era. I remember asking my mom to install it on her computer.


I ran it on my Android and had contributed for almost the past 3 years.


Ah man... it's just old enough to get a legal drink in the USA.


I am sure there will be no conspiracy theories from this news


> I am sure there will be no conspiracy theories from this news

Well, yeah, because the government and military industrial complex censored all of it. But don't worry, Q Anon will figure it out for us.


Soooo, probably a lot of energy wasted for nothing?


Is this the best way to use such a famous brand name? It's not just sad; those who were invested in it are also left feeling abandoned.

Anyway, maybe they have found the aliens and just aren't telling us directly. Otherwise why stop? Just strange.


Wow, I've been using it for 21 years?!


I remember running it back in the day. Sad.


And how many aliens did they discover?


ALIENS ON THE BLOCKCHAIN


2020 is one messy year


or they have already found em aliens

twiddles thumbs


I guess they found what they were looking for.


did they find anything?


Bummer. I have my laptop set to not sleep while plugged into power so it can do work while I’m not using it.

I guess I’ll switch over to another space or environmental project.


I always told people to do the Mersenne prime search instead of s@h.

https://www.mersenne.org/
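
For what it's worth, the core of GIMPS is the Lucas-Lehmer test, which fits in a few lines (a toy sketch; the real Prime95 client uses FFT-based multiplication and these days mostly PRP tests with error checking):

    def is_prime(n: int) -> bool:
        return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

    def lucas_lehmer(p: int) -> bool:
        """M_p = 2^p - 1 is prime iff s_(p-2) == 0 (mod M_p),
        with s_0 = 4 and s_(k+1) = s_k^2 - 2. Assumes p is prime."""
        if p == 2:
            return True  # M_2 = 3 is prime
        m = (1 << p) - 1
        s = 4
        for _ in range(p - 2):
            s = (s * s - 2) % m
        return s == 0

    print([p for p in range(2, 130) if is_prime(p) and lucas_lehmer(p)])
    # -> [2, 3, 5, 7, 13, 17, 19, 31, 61, 89, 107, 127]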


Why is this better than SETI? What's the purpose of finding Mersenne primes?


IMO there isn't much of one, at least not enough for me to feel motivated to run Prime95 (the GIMPS client), but others may find the justifications here more convincing:

https://primes.utm.edu/notes/faq/why.html


> Why is this better than SETI? What's the purpose of finding Mersenne primes?

I hope you're kidding ...

We will probably never find or communicate with ET, so the effort is a waste of time and resources. Space is just too big.

But interesting numbers are already known to be valuable in group theory, encryption, and other areas. Golomb rulers are used today, for example:

https://en.wikipedia.org/wiki/Golomb_ruler

And my standard disclaimer for technological illiterates: "It's not all about you."


I think you are getting downvoted for your tone. I wondered the same thing as the parent post. I genuinely don't know much about Mersenne primes.


Prime95 was where it was at.


why is it?


F


I find it quite funny that people still think that we'll "find" ETs by looking for them.

ETs gave us their DNA. They are our ancestors. Which is why "contact" has been a reality since "our beginning" on Earth.

Physical contact will (officially) happen when humanity is ready for it, measured by the maturity of our shared consciousness.

Marina Jacobi is among those who currently are in direct and constant telepathic contact with various groups of ETs. She often channels them (there are many others who do this too). Check out her Quantum Manifestation Series to learn about what's going on: https://www.youtube.com/channel/UCKaW-6KuhVvEIX-oRG7cBZQ


...

This is HN -- if you post unscientific crap, it will be called out -- and that lady seems like a crackpot.

She does not channel or contact ET life.


It's unfortunate that you are so closed-minded. You will be surprised in a couple of years, at most.



