Blu-rays can take up 25 GB each, so just a decent collection of those could easily consume most of one of these drives. If you want to do basic model tuning in Stable Diffusion, each model variation can take 7 GB; this level of storage means you could almost set up a versioning system for those. And finally, any work with uncompressed data, which can just be easier in general, could benefit from it.
Even with brand-new 25 TB 3.5" drives, it's 10 of them, each holding 1,000 movies, for a total of 20,000 hours of entertainment, or roughly 2 years of uninterrupted watching.
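If you want to sanity-check the math, here's a quick sketch (assuming ~25 GB per Blu-ray rip and ~2 hours per movie, both my own round numbers):

```python
# Rough storage-to-watch-time arithmetic; all figures are round assumptions.
drives = 10          # 25 TB drives
tb_per_drive = 25
gb_per_movie = 25    # typical Blu-ray rip
hours_per_movie = 2

movies = drives * tb_per_drive * 1000 // gb_per_movie  # -> 10,000 movies
hours = movies * hours_per_movie                        # -> 20,000 hours
years = hours / 24 / 365                                # -> ~2.3 years nonstop
print(movies, hours, round(years, 1))
```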
I would love that, too, but it's not possible today. Everyone would offer 25 year warranties, close up shop in 5 years, and reopen as a new subsidiary or company.
The only way I see it working is to hold some large portion of the revenue in a trust and relinquish it to the company over the warrantied lifespan. The company would have to operate at a loss for a while to book those reserves, so there would have to be something like a zero interest government loan to cover the cost, which can't be escaped through bankruptcy.
Or maybe a contract like the shitty cell phone plans in the US. Buyer agrees to pay for the full price of the appliance over the warrantied lifetime in installments. If you want to sell it or trade it in early, you either have to finish off the payment or transfer the contract. The company would have to service the product (within reason), or the contract is voided, releasing the buyer from payment obligations. Again, this system can be easily gamed, too, in today's market, but I just can't imagine a scenario that doesn't require a major paradigm shift.
I do a decent amount of 3d printing and I cannot count the number of random letter Amazon brands for filament that have popped up over the last year. Most are simply rebranded waste from larger manufacturers. Once the product gets below 3.5 stars, the brand disappears and a suspiciously similar new brand pops up with the same spool design and 20 5-star reviews overnight.
lol I wish it was this easy. I got through the simulator in 6yrs 11mos on the second try. At no point was hope above 40%, except once early on (ended at 33%).
The funny thing is that I had 1 conf paper, 1 major result, and 1 figure left over. That's a good year extra, so I assume a perfect game would be to get the 3x papers and GTFO (which is the second best outcome, after not enrolling). There were a couple folks I knew that made it out in 5 years, but more that took 7+. Our lab was notorious for taking over 10, which I skirted by.
Like others said, this was lacking outside events (social/political junk). Hopefully version 2 will take into account:
- at least 1 family death and 1 additional tragedy
- at least two months lost to helping, or waiting for help from, another grad student or postdoc (they did have the lab equipment breaking, which was good to see, but missed the lobbying for every little purchase)
- at least one scope change
- a half dozen favors to gain some political cachet
- a few experiments and/or rewrites to satisfy faculty members who just read about a technical issue they should have already known, but didn't, so now they're highly sensitive to it
- at least 6 months of arranging the data/results in a way that faculty can understand
- 3 months of arguing that the lab's standard procedure for some basic component is a decade out of date
- a few months' worth of preparing premature data for unnecessary meetings
- one (and it better be just one) instance of an offer to help getting waaaay out of control
- the hope boost after your first big conference, and the subsequent conference hope drops
- the drops with each thesis defense from folks a year younger
- etc.

There's more, but that's off the top of my head. Oh, and that slight boost in hope when you hear someone else has a worse problem than your current one. That's a fun one.
Tip for those interviewing - ignore all the year 1-3 folks. Years 1 and 2 are basically undergrads plus some extra classes; year 3 probably hasn't hit the first pile of bullshit yet. Find a year 5 or 6 in your field and talk to them alone. There's a reason they generally don't have senior grad students at recruiting events, and it isn't because they're too busy. Talk to them long enough to get to their exhausted attempts to rationalize some aspect of the experience. If their demeanor doesn't change, you might be safe. If they start hemming and hawing, that's a problem: they haven't even gotten to a specific, non-personal problem and they're already having trouble keeping up the facade. The layers are:
1) Hey, social event, I get to take my mind off lab problems.
2) Getting a little boost by talking to someone still excited.
3) The quiet whisper, "Let me give you some advice."
4) The realization that there's nothing but lab to talk about. That's the threshold.
5) The rationalization alpha - the view from 30,000 feet isn't terrible.
6) The rationalization beta - the rundown of broad problems they're having. This is the point where they will probably, as if by magic, remember that thing they were going to do needs to be done now. (I've got some analysis running I need to check, I need to feed some lab animals, I promised my parents I would call, I told a lab mate I'd help them with this thing and will be up all night, etc.)
7) The rationalization gamma - specific cases of major problems they've seen others have.
8) The rationalization delta - specific problems they're having.
The moment I saw the term "SEO", it was like a stopwatch started counting down to the death of search. It used to be frowned upon to do little tricks, like stuffing keywords in a tiny, transparent font that crawlers would pick up but users would never see.
When gaming search engines became a profession, the end of search appeared on the horizon. Guess we're headed back to web rings and link indexes (which will be consolidated, heavily monetized, and abandoned). If we're lucky, we'll be back to dialup BBSes by the 2040s.
Yes. We can point to the exact moment in time when Google turned to the dark side. It was on August 9, 2006, when Eric Schmidt, CEO of Google, addressed the Search Engine Strategies conference.[1] This was the moment that SEO, now officially endorsed by Google's CEO, became respectable. Until then, SEO was considered to be a branch of the spam industry. There were conferences such as the Web Search Spam Squashing Summit in 2005 on how to kill it.
There was/is whitehat SEO: properly creating links to relevant content within a site, using keywords that help bots find relevant content, traversing your own site to make sure there aren't dead zones with content that will never be found.
Google should have leaned into just this small segment of tooling and swung the ban hammer much harder at the bad actors. They didn't, and a bunch of people have left. I use DuckDuckGo mostly and sometimes Google, but never by default. I don't even like DDG that much, but it's good enough.
SEO sends the message that you have a content writer and, thus, that you are a well-established business. (Producing the required content is more nefarious than just having marketing guys spinning in hamster wheels, but the nature of the work is irrelevant to Google.)
I feel like the turning point for me was when Google removed the ability to always exclude certain sites from searches. I had a number of sites configured to always be excluded because the results were always useless. Ever since, the list of useless sites in my search results has been slowly creeping up.
It really seems high-quality search is fundamentally in opposition to serving ads, alas. (At least it is once practically every page in existence serves ads via the search operator's network.)
That's exactly what Brin and Page said in the paper where they presented Google...
"[W]e expect that advertising
funded search engines will be inherently biased towards the advertisers and away from the needs of the consumers"
and
"[W]e believe the issue of advertising causes enough mixed incentives that it is crucial to have a competitive search engine that is transparent and in the academic realm."
Both from The Anatomy of a Large-Scale Hypertextual Web Search Engine by Sergey Brin and Lawrence Page (1998)
... but more seriously: Yeah, their reasoning (now) isn't exactly going to be unbiased and/or unblemished by their own experience. They probably have the most extreme survivorship bias ever. (Not their fault, they just do because of whatever factors got them into the position they're in.)
Totally. Advertising is fundamentally about distracting someone so you can put money in your pocket without regard to the impact on that someone.
I have a lot of complaints about, say, McDonald's, but they make their money through giving people something of value to them. Advertising legitimizes the making of money in a way unrelated to value delivery. (The same is true about a lot of finance.)
When you combine that with up-and-to-the-right numerical goals and standard executive incentives, over time you pretty much guarantee what Doctorow calls "enshittification". Delivering value becomes at best a side effect of the system.
That's unrelated to the point GP is making. The point is that they are not incentivized to put poison into burgers to get money from the poison industry.
This is a good point. The problem is that it is very hard to make a living serving high-quality results, which is likely why ad-funded search still dominates. The vast majority of the world will likely never want to pay for search on its own.
There are, of course, a few relatively successful paid general-purpose search engines, but these serve a niche demographic if you consider the worldwide scale of Google et al. Possibly specialized search (we build one) will be able to thrive in the future, but these engines also serve niche markets in the end.
Thus, the real competition to ad-based search is not high-quality search, and that is likely why search results don't get better.
I think there are current SEO practices that make sense for a search engine to use as ranking signals: accessibility (mobile friendliness, screen reader compatibility, etc.), time to load (performance, image optimization, no JS bloat), security (HTTPS), nothing appearing on top of the content (modals), use of heading tags and breadcrumbs to order content, structured data for bots, and more.
Of course people game the system, apply shady practices, and sell courses with tips. It is a nuanced topic; the name has been bastardized by marketing companies doing shady things, but at its core it is just a set of practices to create good websites and provide a good user experience.
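For what it's worth, the "structured data for bots" part is just a small block of schema.org markup embedded in the page. A minimal sketch, with made-up page names and URLs, and printed from Python purely for illustration:

```python
import json

# Minimal schema.org BreadcrumbList - the kind of structured data crawlers read.
# The site structure here is hypothetical.
breadcrumbs = {
    "@context": "https://schema.org",
    "@type": "BreadcrumbList",
    "itemListElement": [
        {"@type": "ListItem", "position": 1, "name": "Home",
         "item": "https://example.com/"},
        {"@type": "ListItem", "position": 2, "name": "Guides",
         "item": "https://example.com/guides/"},
        {"@type": "ListItem", "position": 3, "name": "Getting Started",
         "item": "https://example.com/guides/getting-started/"},
    ],
}

# This JSON would normally sit inside a <script type="application/ld+json"> tag.
print(json.dumps(breadcrumbs, indent=2))
```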
You might have stumbled onto a practical and useful application of LLMs. Yahoo (or whoever) could crawl the web, categorise each page, and provide a genuine "Index".
Like in the back of a book, or an old subject-based card catalogue ... Dewey-decimalise the web :)
I've always felt that an index by subject would be more useful than string-match-based searching. Of course, the index might rank links within each sub-sub-sub-sub...category with something like the original PageRank.
Now if Yahoo (or whoever) could avoid the enshittification trap ... imagine what a fabulous resource that could be.
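A rough sketch of what that categorisation step could look like, assuming an OpenAI-style chat API (the model name, category list, and prompt are all placeholders of mine, not anyone's actual pipeline):

```python
from openai import OpenAI  # assumes the official openai Python package

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical top-level subjects, loosely Dewey-style.
CATEGORIES = ["Computer science", "Philosophy", "Religion", "Social sciences",
              "Language", "Natural sciences", "Technology", "Arts",
              "Literature", "History & geography"]

def categorise(page_text: str) -> str:
    """Ask the model to file one crawled page under a single subject heading."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Classify the page into exactly one of: " + ", ".join(CATEGORIES)},
            {"role": "user", "content": page_text[:4000]},  # truncate long pages
        ],
    )
    return resp.choices[0].message.content.strip()
```

The crawler would then group pages by the returned heading to build the browsable index, with something like PageRank ordering the links inside each category.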
Until ChatGPT came along, I figured it was inevitable that human-curated search would come back into ascendancy, as the crawler model has become such a failure.
Now we can use ChatGPT to filter through Google's infested mess, but this double edged Sword of Damocles will be able to create infinite attempts to bury genuine content with ad spam.
I might pay like max. 3 EUR a year for this to get a search engine that gives the good results without ads, SEO spam and bogus clone sites.
That amount of money is probably more than Google now makes from my online presence, because I adblock, block third-party cookies, tend to click "block" on everything including the idiotic "legitimate interest", and never ever click on ads.
Can't remember exactly where I saw it, but the last number I saw said that Google makes ~$12 a year per user. Which begs the question... why have they not at least tried a "Google Premium"?
Fuck it, I'd pay $15 a year to have a Google search that puts as much effort into finding me the shit I actually want as Google does today in serving me BS ads I never pay attention to.
I always wanted to see just a "price transparency" aspect.
Tell me exactly how much the advertiser paid for his placement, and that's a hugely important signal here.
If I'm searching for weird hobby parts, even though it's a high purchase intent query, they're probably paying pennies per click.
But if you start searching, say, financial stuff and the ad placement figures start showing multiple dollars per click, it's a warning "these people are willing to spend THIS MUCH MONEY to present a message to you, this probably means there's something sketchy involved."
I know, for example, anything pertaining to insurance and financial products is highly likely to turn into a farm of cross-selling and personal-information harvesting, because the cost per acquisition is so high and the tendency for everyone to sell the information to everyone else is so great.
I'd argue most people do not even realize they click on ads. My mother, for one! The ad call-outs have gotten so inconspicuous that they're almost indistinguishable from ordinary search results. And if you don't realize that and just click on the top results, you're among the most profitable users.
This is an interesting point. User managed or curated indices offer unique advantages, especially when 'depth of coverage' is more important than 'breadth of coverage'. I believe that we are witnessing people shift away from demanding 'search breadth' as we speak, so someone might possibly decide to do this.
Everything else is effectively the influencer scene. Which is increasingly deplorable as well.
Anything with wide enough reach becomes cost-effective for gaming.
So one would have to return to a highly fragmented world to make gaming the system cost prohibitive.
And that would get us to a pre-Internet world. But then again, it’s not entirely unthinkable that we’re headed towards increasing Internet fragmentation if various governments get their way.
> If we're lucky, we'll be back to dialup BBSes by the 2040s.
Unfortunately, there's no going back - the closest we can get is BBS-via-SSH. The entire landline phone infrastructure is crumbling around us (or in many cases, completely gone). Voice calls are packet-switched now, rather than circuit-switched as in the past. The upshot is, fancy modulation techniques that made full-duplex 33.6k possible over voice-grade connections aren't going to work, and even good old Bell 103 (300 baud) may end up being problematic.
I'm not sure I can even get new landline phone service, and if so, it's going to be expensive - and the wire plant is an unmaintained mess. When I got my folks off their landline and onto VoIP some years back, their old landline had so much hum it was nearly unusable. Once the inside wiring was disconnected from the landline and connected to an ATA, the hum was gone. It wasn't our wiring.
I am trying something different that might work for you with aisearch.vip
The challenge will be staying true to not showing ads, respecting user privacy, and not requiring a subscription. So far, the only thing that works is free daily quota + pre-paid
And the people that put significant effort into their work will reupload it to other services, if they haven't already. Similarly, exceptional work is probably saved by random folks elsewhere.
I'm pretty active in r/3DPrinting and r/FixMyPrint, and strongly support the blackout. I've been avoiding Reddit entirely and getting my news from other sources, while also exploring the fediverse. I had no idea about the rhombik instance/magazine/whatever.
The core problem with the migration is that the information on where to go EXACTLY is hosted on the platform being boycotted. I decided to not visit Reddit anymore and watch for instances to pop up on the fediverse. If others do the same, the migration will be slow. Don't be discouraged if it takes a few days to pick up steam. There's another 3DPrinting magazine that has some users already. https://kbin.social/m/3dprinting@lemmy.world or https://lemmy.world/c/3dprinting
That leads to the second problem of onboarding migrating users to a highly distributed platform. I've mentioned in the boycott server the need for "racks" (the subjects within instances are called magazines; a rack would be a collection of them). These would be moderated aggregators of magazines across instances, like the invisible step between lots of disparate subreddits. There would be no limit on the number of racks, so technically you could have a permutation of every associated magazine-instance combination. The purpose would be to have a single link new users can click to get subscribed to a set of magazines all at once, basically making the federation concept seamless to less technical users while staying highly flexible on the backend. I'm going to shoot the suggestion up the chain for kbin.
That leads to the third problem, which is all these alternatives are new and going through growing pains. Trying to add features comes second to keeping the service stable. I'm hoping others with more coding experience can assist kbin devs.
And I wanted to mention that last I heard (and saw evidence of), some of the main Lemmy devs were kinda garbage people (Tiananmen Square massacre supporters, not just questionable opinions on government). The more controversial instances have been defederated from the primary/intake server, but it's still worth mentioning. Kbin doesn't have that baggage, but there are only a couple devs, and really only one main dev, last I heard.
Kbin users can see and respond to Lemmy and Mastodon instances that are federated, so it has been the migration choice for most of the Reddit boycott groups.
edit: btw - I just looked at the comments on the one post. If you want to run a poll, fine, but most of the people protesting won't be there to vote. I thought the 3D printing subs were largely positive and supportive; if those comments represent the community I was supporting, I now have zero qualms about deleting my comments on Reddit.
What? You don't want to watch the full box set of Star Trek TNG between when you push a button and when the animation ends? That ruins my plans for designing a scrollbar that plays 45 random tracks from your music library between lines scrolled.
OTOH, it IS very satisfying to realize you don't really need to do whatever requires those interactions in the first place.
It speaks to the more serious problem with vaccine hesitancy - the more virions (individual virus 'particles') that exist, the greater the chance of a mutation that finds a way around our defenses. It's just basic evolution.
The variants arise randomly and proliferate with the current major strains. The infected population adapts to reduce the transmission of the most virulent/contagious strains. Selective pressure (tug of war between infecting people and people fighting off the infection) increases on the new variants until one or a few maintain or exceed the transmission of the progenitor strain.
That's the problem with the, "I'll get over it/I'm not worried/My segment of the population doesn't die from it" mentality. The more infections - subclinical, asymptomatic, severe, fatal, undetected - the more rolls of the dice. We (humans) are selecting variants that are worse for us, hoping we can snuff out the infection before some key mutation that eludes our immune system and/or testing develops.
Two other thoughts with internal conflicts/points worth mentioning - First, recovered patients should be more resistant to new strains. Their immune systems threw everything at the virus to defeat it, so their response will be more diversified than that of people with mRNA vaccines targeting a specific protein sequence. (The magnitude and usefulness of the variations in immune response can negate that advantage.) Second, the reasons a 'novel' virus is dangerous are that we, as a species, don't know if we can fight it off (naturally/innately), and we don't know how much the virus can change its protein sequence (to evade our defenses) or how much it can change our response to infection.
Anyways, I'm rambling. Not a virologist, but a PhD (and as such, I think I know more about stuff than I really do), and that's how I think about it. :)
> That's the problem with the, "I'll get over it/I'm not worried/My segment of the population doesn't die from it" mentality. The more infections - subclinical, asymptomatic, severe, fatal, undetected - the more rolls of the dice.
The vaccines we're currently using don't prevent infection as they do not provide sterilising immunity. To quote[1] Sunetra Gupta[2], infectious disease epidemiologist and a professor of theoretical epidemiology at the Department of Zoology, University of Oxford:
> The vaccines that we currently employ appear to be highly effective in preventing life-threatening illness but do not meaningfully contribute to the maintenance of herd immunity.
> …we find ourselves trapped by the superannuated conviction that vaccines must block infection as well as disease.
Given that, how is contrasting those who've had vaccines with those who haven't of any relevance? We have other comments in the thread (like this one[3]) pointing out that it could be immuno-compromised people that are the main source of new variants. Right now, they're among the least likely people to have been vaccinated.
> We (humans) are selecting variants that are worse for us
I believe this to be a very common misconception. Evolution selects variants that are better at replicating. It does not follow that those are worse for our health.
Imagine a variant that would effectively be very lethal to us: wouldn't it go extinct if the bearers die or even stay home sick before it can spread?
On the other hand, the vaccines could be putting more stress on the virus, as they generally aren't very good. I'm saying this as someone who had the virus and then the vaccine.
I'm putting this delicately, but if we're worried about things that might cause cancer, we might want to look at the past 40 years of novel chemicals approved in household or environmental products.
I'm not saying short-term, high-intensity FDA et al. EUA is ironclad, but it's a helluva lot more stringent and reviewed than what the EPA approves (or fails to ban) due to "economic considerations."
So, if we're in the worrying mindset, prioritization seems important.
I kept waiting for it to get better than a light brown haze with flashes of green... but it never did. The fire looked okay, but even that looked weird at the beginning of the sequence.
What really made me laugh was the blue smoke followed by a headline implying that was all at night. The AI filled in pristine blue skies and inverted the wreckage colors. It actually made up history. I'm calling it: Unintentional automated disinformation.
Mutant ABC is the target. First Mutant A becomes the predominant strain. There might be some Mutant AB or AC in the population, but the controlling factor is currently Mutant A.
Eventually, Mutant A therapies/vaccines are introduced and Mutant A starts to lose its hold. Mutant AB isn't treated by the new treatments (or they're less effective, or Mutant AB goes undetected longer, etc.). Mutant AB becomes king of the hill. Repeat with Mutant ABC.
All this would assume that there's no Mutant B or Mutant BC or Mutant AC that increases in prevalence. All the mutations would have to build up over time or occur simultaneously by coincidence (or you could get a combination).
It's just basic evolution - organism follows the niche. We have a hard time recognizing it because generation turnover is so rapid. Instead of taking ~30 years for 2.5 offspring, it's a few days for millions of offspring.
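If you want to see the "more rolls of the dice" effect play out, here's a toy Wright-Fisher-style sketch. Every number in it is invented for illustration; it's a cartoon of selection under ratcheting treatment pressure, not epidemiology:

```python
import random

# Toy model: each generation, strains reproduce in proportion to fitness, and
# each offspring has a small chance of picking up one more "escape" mutation
# (the A -> AB -> ABC progression). All parameters are made-up round numbers.
MUTATION_RATE = 1e-4
POP_SIZE = 100_000

def fitness(n_escape_mutations: int, treatment_targets: int) -> float:
    # A strain only benefits from escape mutations at sites currently under
    # treatment/vaccine pressure; untargeted mutations drift neutrally.
    escaped = min(n_escape_mutations, treatment_targets)
    return 1.0 + 0.5 * escaped

def generation(pop: dict[int, int], treatment_targets: int) -> dict[int, int]:
    weights = {k: v * fitness(k, treatment_targets) for k, v in pop.items()}
    strains, w = list(weights), list(weights.values())
    new_pop: dict[int, int] = {}
    for strain in random.choices(strains, weights=w, k=POP_SIZE):
        if random.random() < MUTATION_RATE:
            strain += 1  # one more escape mutation
        new_pop[strain] = new_pop.get(strain, 0) + 1
    return new_pop

pop = {0: POP_SIZE}  # everyone starts as the original strain
for gen in range(60):
    treatment_targets = gen // 20  # pressure ratchets up as new therapies roll out
    pop = generation(pop, treatment_targets)
print(sorted(pop.items()))  # escape mutations accumulate once they're selected for
```

The larger the infected population (POP_SIZE here), the more escape mutations arise per generation for the treatment pressure to amplify, which is the whole point about minimizing infections.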
I don't think the mode of reproduction matters much when we're talking at this scale. While the virus creates stable clones across generations, I would think they're more susceptible to mutations, and any viable mutation is quickly amplified exponentially. Just like the old high-school bio experiment to demonstrate antibiotic resistance, only your respiratory mucous membranes are the medium. Those bacteria reproduced asexually, too.