I suspect the underlying problem is that the gap between legitimate use of gift cards and fraudulent use of gift cards is just not very large...
Years ago I briefly played around with "manufactured spend" (on credit cards, to earn frequent flyer miles).
There was one specific loophole, with one specific gift card provider, and it was a doozy. You could earn credit card points on spend, plus supermarket loyalty points on spend, by buying gift cards from one specific provider which could be cashed out at face value (ie no fee at all) immediately to a specific type of savings account.
So, of course, world+dog was buying these things like it was the end of the world.
As I sat in a hotel room one evening rubbing the security codes off the latest batch of cards before redeeming them one-by-one into my savings account, it dawned on me that what I was doing was basically indistinguishable from money laundering. Of course it was NOT money laundering, but it would take some time to explain exactly why not...
The loophole was closed relatively quickly, and the gift card provider gave up.
I did this ages ago to build up airline points and take a nice trip to the EU.
Back then, the trick was to get a generic Vanilla Visa or other prepaid credit card. A recent legal ruling meant they had to be run as a debit card for... reasons... I forget them.
But a lot of grocery stores would sell you a money order up to 500 bucks for under a dollar with a debit card (not a credit card).
So you'd call up the issuer and have them issue it a PIN. Then you'd run it as a debit card and buy a 500 dollar money order.
Subtract ~$5 for the GC and ~$1 for the MO and you could manufacture about 500 bucks in spend. And the best part? You could take that money order to your bank, deposit it, get the funds immediately, pay off your balance, then rebuy.
In one afternoon I earned enough points for a first class flight to a fancy European city, and eternal side eye from the grocery store clerks who were convinced I was up to something but couldn't put their finger on what.
>Back then, the trick was to get a generic Vanilla Visa or other prepaid credit card. A recent legal ruling meant they had to be run as a debit card for... reasons... I forget them.
Interchange fees, probably. Otherwise the credit card companies are taking a 2-3% cut.
>So you'd call up the issuer and have them issue it a PIN. Then you'd run it as a debit card and buy a 500 dollar money order.
I don't know how this ever could have worked considering that "cash-like transactions" are counted as cash advances, same as if you were to use your credit card at an ATM.
> considering that "cash-like transactions" are counted as cash advances, same as if you were to use your credit card at an ATM
Afaik, gift cards are more like fixed balance debit cards that happen to be runnable over a specific payment network (e.g. VISA, MC, AMEX) as credit cards
But at least a fair number of them will allow you to set a PIN, which then allows their use as normal debit cards
You're not running it as a credit card, and it's not a credit card -- you can't do a cash advance on a gift card. But they sold ones that were accepted anywhere visa or MC is accepted rather than specific stores.
> but it would take some time to explain exactly why not...
Not really:
"I'm churning credit cards for the rewards points. Here is the receipts where I use $10k from account A to purchase $10k worth of gift cards. Here is the statements where I deposit $10k of gift cards into account B. Here is the statement for the $10k wire from B to A. And here are the receipts for the next round of gift cards I purchased. Any further questions? I have $10k of gift cards to redeem."
> the gap between legitimate use of gift cards and fraudulent use of gift cards is just not very large.
And many legitimate uses of gift cards may actually have been fraudulent somewhere up the chain.
Imagine a scammer who sells fraudulently obtained cards on to real users (perhaps through one or more less-than-scrupulous intermediaries willing to buy them in crypto without asking too many questions). If the original victim comes to their senses and somehow gets those cards reported and blocked as fraudulent, the unsuspecting end users will get into trouble.
> it dawned on me that what I was doing was basically indistinguishable from money laundering. Of course it was NOT money laundering
But it is money laundering; that's what manufactured spend is. It's not laundering to conceal evidence of a crime, but it is laundering in the sense of concealing the fact that you didn't engage in real commerce while spending money on a credit card to earn a reward. The act is indistinguishable; the only thing that distinguishes it legally is intent, because we criminalize behavior not just on its face but on its intent.
They call it laundering because it takes "dirty" money and makes it "clean". That's not what happened here. The money was perfectly clean to begin with.
Which law do you think was being broken? I think the person is pretty clearly not defrauding the bank. Maybe the credit card company doesn't like it, but they almost certainly don't have that in writing because if they'd considered this possibility, they wouldn't have allowed it to be possible in the first place.
That's not what money laundering means. Where was the illegal activity that led to the money's existence? He just used a rewards loophole, didn't clean anything of actual "dirty" origin.
Not engaging in commerce to earn rewards isn't illegal, it's just an oversight on their part.
We criminalize behavior based on whatever we feel like, based on our cultural expectations of what is allowed. That's what "we criminalize behavior not only on its base but due to its intent" and "considering the context" is all about. That's why we have juries. We reserve the right to break the rules if public opinion allows, based on our feelings. It turns out that justice in practice is not so blind.
For example, we feel like it is fair for credit card companies to monopolize payment systems, charge fees to businesses, and use a portion of the money from this scheme to set up this bullshit reward point system.
But to undermine this system is criminal, because the system is established, but undermining it is novel and therefore disallowed. Any new way to play the game is breaking the rules, because the purpose of the system is what it does.
I wasn't trying to write a fully formed political dissertation, so I'm not really sure what you were expecting in response to this comment? My point was that the GP was describing their behavior as "indistinguishable from money laundering", because it technically is a form of money laundering (the act) even if it's not money laundering (the crime). Intent is what turns the act into a crime, specifically in the case of money laundering.
It's not illegal to buy a few beers every evening from a bar you own out of your own pocket, and then book that revenue, pay taxes on it, and then ultimately collect a distribution of the profits as the owner of the business. It is illegal to do the same thing if the money you took out of your pocket came from selling drugs.
I wish there were more open discussions about how "Journal Impact Factor" came to be so important.
It seems absurd that researchers fret about where to submit their work and are subsequently judged on the impact of said work based in large part on a metric privately controlled by Clarivate Analytics (via Web of Science/Journal Citation Reports).
It is almost unanimously agreed upon that impact factor is a flawed way of assessing scientific output, and there are a lot of ideas on how this could be done better. None of them have taken hold. Publishers are mostly a reputation cartel.
Clarivate does control it because they tend to have the best citation data, but the formula is simple and could be computed using data freely accessible in Crossref. Crossref tends to under-report forward citations, though, due to publishers not uniformly depositing data.
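For concreteness, the standard two-year calculation is just a ratio. A minimal sketch with placeholder counts (the actual Crossref or Web of Science data gathering is left out):

    # Two-year Journal Impact Factor for year Y: citations received in Y
    # to items published in Y-1 and Y-2, divided by the number of citable
    # items published in Y-1 and Y-2.
    def journal_impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
        return citations_to_prev_two_years / citable_items_prev_two_years

    # placeholder counts, for illustration only
    print(journal_impact_factor(1200, 400))  # -> 3.0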
I suspect there may be a pattern: every time I hear on the radio that it's "World $x Day", I'm afraid I start wondering who's actually behind that specific press release and/or what funding and incentives are really in play...
(Genuine question) What's your current plan for when your cloud provider goes offline? Do you have a failover story, or is it a case of "wait for them to come back online"?
> where I live in Cologne, there's typically a high speed train every 20 to 30 minutes to Frankfurt. If one train is delayed by 30 minutes, then suddenly you have two (or more) trains right on top of each other heading to the same destination, both on very very congested lines that they're simultaneously trying to do repairs and expansions to. Those are the sorts of situations where it makes sense to just cancel the train, not because of metrics but because of actual track constraints.
Never mind congested lines, remember the trains are full of paying passengers!
Let's assume both trains were more than half full of passengers, which is fairly typical: what would you plan to do with the passengers on the cancelled train who can't get on the other train because there is literally no room for them?
I recently travelled on a badly-delayed ICE train (to Frankfurt Airport, as it happens) and it was running so late I ended up rebooking my flight from the stationary ICE because I lost confidence we would get to the airport in time for my flight.
> Calibri was supposedly easier to read by people with disabilities
I'd love to know how that was determined. Given that:
"If different fonts are best for different people, you might imagine that the solution to the fonts problem would be a preference setting to allow each user to select the font that’s best for them.
This solution will not work, for two reasons. First, previous research on user-interface customization has found that most users don’t use preference settings, but simply make do with the default.
Second, and worse, users don’t know what’s best for them, so they can’t choose the best font, even if they were given the option to customize their fonts. In this study, participants read 14% faster in their fastest font (314 WPM, on average) compared to their most preferred font (275 WPM, on average)"
> Second, and worse, users don’t know what’s best for them, so they can’t choose the best font, even if they were given the option to customize their fonts. In this study, participants read 14% faster in their fastest font (314 WPM, on average) compared to their most preferred font (275 WPM, on average)"
What you actually want to compare the preferred-font speed against, to show that individual choice is or is not better than a one-size-fits-all default, is the speed in the font that would be chosen as the universal default by whatever mechanism would be used (and to show individual choice is universally better, show that there is no single universal font choice that would make the average user faster than their preferred font does).
All that comparing each individual's preferred font to each individual's fastest font shows is that an individualized, test-based font choice beats individual preference for reading speed, which I guess is interesting if you are committed to individualized choices, but not if the entire question is whether individual or centralized choices are superior.
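As a toy illustration (made-up numbers, nothing from the study): the figure you'd need for the individual-vs-default question is the third one computed below, not the second.

    # Made-up reading speeds (WPM) per reader per font, purely illustrative.
    speeds = {
        "reader1": {"A": 320, "B": 290, "C": 250},
        "reader2": {"A": 260, "B": 330, "C": 270},
        "reader3": {"A": 280, "B": 300, "C": 310},
    }
    preferred = {"reader1": "B", "reader2": "C", "reader3": "A"}

    # 1) average speed in each reader's preferred font
    pref_avg = sum(speeds[r][preferred[r]] for r in speeds) / len(speeds)
    # 2) what the study compared it against: each reader's personal fastest font
    fastest_avg = sum(max(speeds[r].values()) for r in speeds) / len(speeds)
    # 3) what the individual-choice vs one-default question needs:
    #    the single font that maximizes the average speed across all readers
    best_default = max(["A", "B", "C"], key=lambda f: sum(speeds[r][f] for r in speeds))
    default_avg = sum(speeds[r][best_default] for r in speeds) / len(speeds)

    print(pref_avg, fastest_avg, best_default, default_avg)
    # Whether pref_avg beats default_avg depends entirely on the numbers;
    # comparing pref_avg to fastest_avg doesn't answer that question.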
The (ex-)scientist in me is looking for a controlled study, ideally published in a peer reviewed journal, looking at - how can I put this - actual data.
"A single-subject alternating treatment design was used to investigate the extent to which a specialized dyslexia font, OpenDyslexic, impacted reading rate or accuracy compared to two commonly used fonts when used with elementary students identified as having dyslexia. OpenDyslexic was compared to Arial and Times New Roman in three reading tasks: (a) letter naming, (b) word reading, and (c) nonsense word reading. Data were analyzed through visual analysis and improvement rate difference, a nonparametric measure of nonoverlap for comparing treatments. Results from this alternating treatment experiment show no improvement in reading rate or accuracy for individual students with dyslexia, as well as the group as a whole. While some students commented that the font was “new” or “different”, none of the participants reported preferring to read material presented in that font. These results indicate there may be no benefit for translating print materials to this font."
Advocacy for people with disabilities is important, but actual data may be even more important.
A meaningful testing of the differences between fonts is greatly complicated by the effect of the familiarity with the tested fonts.
The differences between individuals which perform better with different fonts may have nothing to do with the intrinsic qualities of the fonts but may be determined only by the previous experience of the tested subjects with the tested fonts or with other fonts that are very similar to the tested fonts.
Only if you measure reading speed differences between fonts with which the tested subjects are very familiar, e.g. by having read or written a variety of texts for one year or more, can you conclude that the speed differences may be caused by features of the font; and if the optimal fonts then differ between users, this is a real effect.
There are many fonts that have some characters which are not distinctive enough, so they have only subtle differences. When you read texts with such fonts you may confuse such characters frequently and deduce which is the correct character only from the context, causing you to linger over a word, but after reading many texts you may perceive automatically the inconspicuous differences between characters and read them correctly without confusions, at a higher speed.
Many older people, who have read great amounts of printed books, find the serif typefaces more legible, because these have been traditionally preferred in book texts. On the other hand, many younger people, whose reading experience has been provided mainly by computer/phone screens, where sans-serif fonts are preferred because of the low resolution of the screens, find sans-serif fonts more legible. This is clearly caused only by the familiarity with the tested fonts and does not provide information about the intrinsic qualities of the fonts.
Moreover, the resolution of most displays, even that of most 4k monitors, remains much lower than the resolution of printed paper and there are many classic typefaces that are poorly rendered on most computer monitors. To compare the legibility of the typefaces, one should use only very good monitors, so that no typefaces are handicapped. Otherwise, one should label the study as a study of legibility as constrained by a certain display resolution. At low enough display resolutions, the fonts designed especially to avoid confusions between characters, like many of the fonts intended for programming, should outperform any others, while at high display resolutions the results may be very different.
> Moreover, the resolution of most displays, even that of most 4k monitors, remains much lower than the resolution of printed paper and there are many classic typefaces that are poorly rendered on most computer monitors. To compare the legibility of the typefaces, one should use only very good monitors, so that no typefaces are handicapped.
I'm afraid I assumed this particular part was a joke, but having read it several times I'm no longer sure ...
Assuming it's not a joke, what would you suggest to readers of content using any particular font who don't have "very good monitors"? What are they supposed to do instead? Not attempt to read the content? Save up for a better monitor?
I wrote the above posting before reading the complete research paper linked by the previous poster.
After reading the complete paper, I see that the study is much worse than I had supposed based on its abstract.
This study is typical of the font legibility studies made by people without knowledge of typography. I find it annoying that such studies are so frequent. Whoever wants to make such a study should consult a specialist before doing another useless study.
The authors claim that a positive feature of their study is the great diversity of fonts that they have tested: 16 fonts.
This claim is simply false. All their fonts are just very minor variations derived from 4 or 5 basic types, and even those basic types have only a few relevant differences from Times New Roman and Arial.
None of their fonts includes any valuable innovation in typeface design made after WWII, and most do not include any valuable innovation made after WWI. They include a geometric sans serif, which is a kind of typeface created after WWI, but that kind of typeface is intended for packaging and advertising, not for bulk text, so its inclusion has little importance for a legibility test.
I would classify all their 16 typefaces as "typefaces that suck badly" from the PoV of legibility and I would never use any of them in my documents.
Obviously, other people may not agree with my opinion, but they should be first exposed to more varied kinds of typefaces, before forming an opinion about what they prefer, and not only to the low-diversity typefaces bundled with Windows.
After WWII, even though the (in my opinion bad) sans-serif typefaces similar to Helvetica/Arial have remained the most widespread, with letter shapes so simplified that many letters are ambiguous, other kinds of sans-serif typefaces have also appeared, which combine some of the features of older sans-serif typefaces with some of the features of serif typefaces.
In my opinion, such hybrid typefaces (e.g. Palatino Sans, Optima Nova, FF Meta, TheSans, Trajan Sans) are better than both the classic serif typefaces and the classic sans-serif typefaces.
The purpose of that research study wasn't to survey the entire history of sans-serif design(!), it was to answer a fairly focused question: does OpenDyslexic improve reading for the population it claims (or claimed) to help?
(In good faith) I'm trying really hard not to see this as an "argument from incredulity"[0] and I'm struggling...
Full disclosure: natural sciences PhD, and a couple of (IMHO lame) published papers, and so I've seen the "inside" of how lab science is done, and is (sometimes) published. It's not pretty :/
If you've got a prompt along the lines of: given some references, check their validity. The model searches against the articles and URLs provided and returns "yes", "no", and let's also add "inconclusive", for each reference. Basic LLMs can do this much instruction following, just as, 99.99% of the time, they don't get 829 multiplied by 291 wrong when you ask them (nowadays). You'd prompt it to back all claims solely with search/external links showing exact matches and not to use its own internal knowledge.
The fake references generated in the ICLR papers were, I assume, due to people asking an LLM to write parts of the related work section, not to verify references. With that prompt the model relies a lot on internal knowledge and probably spends most of its time working out what the relevant subareas and cutting edge are. I suppose it omits a second-pass check. In the other case, you have the task of verifying references, which is mostly basic instruction following for advanced models that have web access. I think you'd run the risks of data poisoning and model timeout more than hallucinations.
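For what it's worth, a rough sketch of that verification pass; web_search() and call_llm() here are hypothetical stand-ins for whatever search tool and model API you have, and the prompt wording is only illustrative:

    # Hypothetical reference-checking pass: web_search and call_llm are
    # stand-ins passed in by the caller, not real library functions.
    def check_reference(ref, web_search, call_llm):
        hits = web_search(ref)  # e.g. a list of {title, url, snippet} results
        prompt = (
            "You are verifying a bibliographic reference.\n"
            f"Reference: {ref}\n"
            f"Search results: {hits}\n"
            "Answer 'yes' if a search result exactly matches this reference, "
            "'no' if the results show it does not exist, and 'inconclusive' "
            "otherwise. Do not rely on your own memory of the literature."
        )
        answer = call_llm(prompt).strip().lower()
        return answer if answer in {"yes", "no", "inconclusive"} else "inconclusive"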
> How different is this from rental car companies changing over their fleets?
New generations of GPUs leapfrog in efficiency (performance per watt) and vehicles don't? Cars don't get exponentially better every 2–3 years, meaning the second-hand market is alive and well. Some of us are quite happy driving older cars (two parked outside our home right now, both well over 100,000km driven).
If you have a datacentre with older hardware, and your competitor has the latest hardware, you face the same physical space constraints, same cooling and power bills as they do? Except they are "doing more" than you are...
The traditional framing would be cost per flop. At some point your total cost per flop over the next 5 years will be lower if you throw out the old hardware and replace it with newer, more efficient models. With traditional servers that's typically after 3-5 years; with GPUs, 2-3 years sounds about right.
The major reason companies keep their old GPUs around much longer now is supply constraints.
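A back-of-the-envelope version of that break-even, with all numbers invented for illustration (power price, throughput and lifetimes are placeholders, and cooling/space overhead is ignored):

    HOURS_PER_YEAR = 8760

    def cost_per_pflop_year(capex, watts, pflops, years, usd_per_kwh=0.10):
        # total cost (purchase + electricity) divided by compute delivered
        power_cost = watts / 1000 * HOURS_PER_YEAR * years * usd_per_kwh
        return (capex + power_cost) / (pflops * years)

    # keep the old, already-paid-for card (capex = 0) vs buy a newer one
    old = cost_per_pflop_year(capex=0,      watts=700, pflops=1.0, years=5)
    new = cost_per_pflop_year(capex=30_000, watts=700, pflops=4.0, years=5)
    print(old, new)  # replacing wins whenever `new` comes out below `old`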
The used market is going to be absolutely flooded with millions of old cards. I imagine shipping being the most expensive cost for them. The supply side will be insane.
Think 100 cards to only 1 buyer as a ratio. Profit for eBay sellers will be in "handling", or inflated shipping costs.
I assume NVIDIA and co. already protect themselves in some way, either by the fact that these cards aren't very useful after resale, or by requiring them to go to the grinder after they expire.
In the late '90s, when CPUs were seeing the kind of advances GPUs are now seeing, there wasn't much of a market for two/three-year-old CPUs. (According to a graph I had Gemini create, the Pentium had 100 MFLOPS and the Pentium 4 had 3000 MFLOPS.) I bought motherboards that supported upgrading, but never bothered, because what's the point of going from 400 MHz to 450 MHz, when the new ones are 600 or 800 MHz?
I don't think nVidia will have any problem there. If anything, hobbyists being able to use 2025 cards would increase their market by discovering new uses.
Cards don't "expire". There are alternate strategies to selling cards, but if they don't sell the cards, then there is no transfer of ownership, and therefore NVIDIA is entering some form of leasing model.
If NVIDIA is leasing, then you can't use those cards as collateral. You also can't write off depreciation. Part of what we're discussing is that terms of credit are being extended too generously, with depreciation in the mix.
They could require some form of contractual arrangement, perhaps volume discounts on cards, if buyers agree to destroy them at a fixed time. That's very weird though, and I've never heard of such a thing for datacenter gear.
They may protect themselves on the driver side, but someone could still write OSS.