"AFAIU the root DNS servers will refuse requests for recursive resolutions."
All the DNS data served by the "DNS root servers" is public and available for download via HTTP or FTP request.[FN1] As such, IMHO, there is rarely a reason to query those servers for the data they serve. I already have it.[FN2] For example, the IP address for the "a" com nameserver probably will not change in my lifetime. I am so sure of this that I have it memorised as 192.5.6.30.
A simple illustration of how to reduce DNS queries to one and reduce remote DNS queries to zero. Compile nsd as a root server using --enable-root-server. Add DNS data to a root.zone. Bind nsd to a localhost address, e.g., 127.8.8.8 and serve the zone. Configure the system resolver to query nameserver 127.8.8.8.
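A sketch of that setup, assuming nsd 4 configuration syntax (the 127.8.8.8 address is from above; file locations are illustrative):
$ ./configure --enable-root-server && make && make install
$ cat nsd.conf
server:
    ip-address: 127.8.8.8
zone:
    name: "."
    zonefile: "root.zone"
$ cat /etc/resolv.conf
nameserver 127.8.8.8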
FN1. How to get the DNS data that the root servers provide, without using DNS:
x=$(printf '\r\n');
sed "s/$/$x/" << eof|openssl s_client -connect 192.0.32.9:43 -ign_eof
GET /domain/root.zone HTTP/1.1
Host: www.internic.net
User-Agent: www
Connection: close
eof
ftp://192.0.32.9/domain/root.zone
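Or, if curl is available, the same file over HTTPS:
$ curl -O https://www.internic.net/domain/root.zone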
FN2. Except in emergencies where I have no access to locally stored root.zone DNS data. Also because I have the "a" root server IP address memorised as 198.41.0.4, I sometimes use it as a ping address.
I adore tagging systems and have worked on them in several different applications and implementations, but there are always pitfalls and trade-offs, and it's possible to bury yourself.
Nowadays I nearly always store the assigned tags as an integer array column in Postgres, then use the intarray extension to handle the arbitrary boolean expression searches like “((1|2)&(3)&(!5))”. I still have a tags table that stores all the metadata / hierarchy / rules, but for performance I don’t use a join table. This has solved most of my problems. Supertags just expand to OR statements when I generate the expression. Performance has been excellent even with large tables thanks to pg indexing.
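For anyone who hasn't used intarray, a minimal sketch of the pattern (table and column names are illustrative):
$ psql -c "CREATE EXTENSION IF NOT EXISTS intarray"           # ships with Postgres contrib
$ psql -c "CREATE TABLE items (id serial PRIMARY KEY, tags int[])"
$ psql -c "CREATE INDEX items_tags_gin ON items USING gin (tags gin__int_ops)"   # GIN index makes the expression searches fast
$ psql -c "SELECT id FROM items WHERE tags @@ '(1|2) & 3 & !5'::query_int"        # arbitrary boolean tag search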
1. You forgive many, but is Netlify forgiving all obvious anomalies? That is the question. If so, fine; but you said "many", so the answer is no, and that should make you reconsider the next point.
2. Favoring keeping people's sites up? Would you go as far as keeping them up if they stopped paying for the meter? If not, you simply should not let that meter go overboard.
Hey, I'm a taxi driver. A hailer fell asleep in the back, so I kept driving all night; once he woke up, I dropped him at his place and billed him a month's wages. I "forgive" many such fares, but a few are juicy income, so I adopted a policy of never waking any customer up. If people ask, I say it would be impolite; principles come first.
At this point people advocating the position you’re advocating for are in a state of denial (this is my opinion, not a matter of fact, obviously). Your assumption is that we can effectively prevent the majority of the nation from exposure via lockdown.
Not only does evidence seem to point against that, but when you do the math on mortality due to suicide and overdose it’s not clear that containment would even save more lives in the long run.
Here’s how you can tell people’s philosophical positions: if they talk about fear of a “second wave”, they are Containers, since that implies the initial “wave” will not infect the majority; i.e., the virus is successfully contained (EDIT: see https://news.ycombinator.com/item?id=22961927 for the caveat here).
Ironically, leaders like Fauci say outright that containment is not the strategy, yet every word he says, and the IHME model everyone is relying on, are the product of a Containment ideology.
The alternative is what I would call Pareto mitigation. The vulnerable portion of the population self isolates, while the rest of us are _allowed_ to resume working and living more or less normally (still no large gatherings presumably).
I'd like to take this moment to put out a brief PSA that the serological data coming out, while not 100% reliable, is all telling more or less the same story. Let's look at these IFRs (the second link is CFRs, but for Italian healthcare workers who presumably are all getting tested, so I'm treating it as a de facto IFR):
(I'm linking to the reddit comments instead of the actual study because they're really nice tables and the links are still there for anyone who wants to double check)
As others have said, for those around age 45 or younger, Covid is equally or less dangerous than influenza. And particularly for those under 30, the flu is at least an order of magnitude more deadly.
In the general population overall, Covid is undeniably more deadly than the flu, but only about 3-5x (and I think 3x personally right now).
Recall that the flu is characterized by deaths in the very young and very old, while being less harmful to those "in between", purportedly due to the "cytokine storm" which is a scorched earth reaction of the immune system. Covid is very different, it is extraordinarily deadly to the very old, extraordinarily non-deadly to the very young, and about the same as the flu to those in between.
A disease with such a "spiky" (highly variable) mortality rate based on your risk factors is precisely the kind of disease that is most effectively treated with risk-informed self quarantine rather than a national lockdown.
Unemployment is correlated with a 2-3x higher chance of suicide, of which perhaps half can be explained away by mental health confounds [1]. There are unique factors in play here (rampant social isolation and widespread fearmongering, propagated even by health experts and "trusted" news sources at times) that lead me to believe that the spike in suicides and overdoses will actually be much higher than predicted by unemployment alone.
We're currently at 50,000 suicides per year in the US as a base rate; it is not unimaginable that we would see at least 50,000 _extra_ suicide deaths attributable to a mixture of lockdown and the general socioemotional environment.
--
I haven't even gotten to the philosophical battle of "freedom versus security". I am, ideologically, someone who drank the koolaid and really believes in freedom and civil liberties over "security" (which I view as illusory anyway), but _even just viewed through the lens of reducing mortality_, the evidence is stacking up that lockdown is going to do more harm than good.
Is the evidence fully settled? Of course not. But it's shocking to me how many people seem to be operating off of the projected CFRs we had in early February, shouting from the rooftops about "1 in 20" people dying (random recent case in point: https://news.ycombinator.com/reply?id=22952764&goto=threads%...). I don't know whether it's that a large swath of the population already had clinical anxiety, further magnified by social isolation, social media, and news headlines, or whether something else is at play, but I'm very concerned about the state of discourse in the United States right now, and more broadly, the entire world. Ironically, I feel a bit luckier to be in the US than in some other countries: because in the US _every_ issue is partisan, roughly half the country will be in favor of ending the lockdown at any given time (the position I am advocating for, within reason, insofar as hospitals are not overwhelmed). Elsewhere, you can be given a $1600 ticket for driving a car by yourself, based on the superstition that _being outside_ causes Covid, as opposed to exposure to infected respiratory droplets...
--
EDIT: Lastly I should mention that in a perfect world we could have voluntary variolation; I would love to be able to expose myself to a controlled dose of SARS-CoV-2 and self isolate for several weeks to ensure that I can never pass on Covid to someone else. Unfortunately that would be very hard to make a reality due to the political environment, even though I am advocating for it to be totally voluntary. I was heartened to see this recent paper toying with a variant of that approach: https://www.medrxiv.org/content/10.1101/2020.04.12.20062687v... (I don't agree with an "Immunity Card" for ideological reasons but I'm glad we have a paper attempting to model it out which does show benefit of voluntary self exposure)
I'm a little concerned by the tone of this article. It keeps talking about "punishment" by a massive corporation, which gives me dystopian chills.
I agree with the argument that YouTube does not want to monetize his content (which I've admittedly never seen). But their power should not extend beyond their website.
The author sounds upset that they can't destroy this person's own business outside of YouTube. The most YouTube should be able to do is demonetize or, in extreme cases, ban. Google (or any other entity) should not be the police of the entire internet.
Just a note on this: Prototypes can be enough to get a company going. Engineers/Makers think that the product is the most important part of starting a company. It's not. Selling your vision to investors and customers is. Once you have a business plan that's funded, you can refine your product by hiring consultants that will help you with production. Once you have a product then you can get customers to refine the product further.
Why is bottled water a billion dollar business in the U.S. when the U.S. has the safest drinking water in the world? I am very sure it's not the product.
If you can't sell, define your vision, build the prototype, and find someone to sell them for you. Salespeople are viewed as BS'ers by engineers, but without them companies can never succeed.
Finally! A cryptocurrency with the ethics of Uber, the censorship resistance of Paypal, and the centralization of Visa, all tied together under the proven privacy of Facebook. Just what I’ve been waiting for.
There are also more straightforward reasons to want a microwave connection from NYC to Chicago. Say you want to trade on news signals: you want to relay the headlines (from Bloomberg, DJ, etc.) to your colo sites as quickly as possible.
Not really! Let's say user 1 makes an API call to our server to fetch browse node 1000 (Books). The first time such a request is made, our server will make an API call to Amazon using user 1's AWS key. Once we get the response back, we cache the result and send it to user 1. When user 2 requests the same browse node 1000 (Books), we will not hit Amazon's servers again. Instead, we will fetch the cached data from our database and send it back to user 2 without using his/her AWS key. This way, the number of API calls made stays below one per second per user.
Is it safe to backup a database by just straight copying its files? Wouldn't it be better to use a specific backup tool, e.g. pg_dump for Postgres or mongodump for Mongo?
(Hopefully you've tested recovery procedures, because as the saying goes, "your backups are only as good as your last restore")
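For Postgres, a minimal sketch of the tool-based route, including the restore rehearsal (database names are illustrative):
$ pg_dump -Fc mydb > mydb.dump                 # custom-format logical backup; consistent even while the server is running
$ pg_restore --create -d postgres mydb.dump    # rehearse the restore against a scratch cluster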
$ echo -n 'The SHA256 for this sentence begins with: one, eight, two, a, seven, c and nine.' | sha256sum
182a7c930b0e5227ff8d24b5f4500ff2fa3ee1a57bd35e52d98c6e24c2749ae0
Curious, does AWS Glue or SchemaCrawler do any type inference past basic data types? Such as string analysis to automatically mark fields as a shipping tracking number, IP address, ISO country/city, correct date format, etc?
> As soft prerequisites, we assume basic comfortability with linear algebra/matrix calc [...]
That's a bit of an understatement. I think anyone interested in learning ML should invest the time needed to deeply understand Linear Algebra: vectors, linear transformations, representations, vector spaces, matrix methods, etc. Linear algebra knowledge and intuition is key to all things ML, probably even more important than calculus.
This is a huge concern for me at my current organization. Dev has decided to put all data into MongoDB, yet all decisions are based on that data, and the tools we have do not allow for seamless flow (ETL) from MongoDB. That data is important for deriving decisions that affect revenue and costs. Where are the solutions for the data analysts and scientists? Frankly, I'm pretty sick of hearing it can just be automated.
In my mind there has to be a decent "business intelligence stack". I'm not sure whether I'm coining that term; I didn't get good search results for the phrase, and believe me, I've been trying to find solutions. I believe there is a big opportunity in building out this sort of stack, one that bridges data management and data analysis. Sure, you can call IBM, Microsoft, Dell, or HP, but be prepared for big costs and huge software bloat. I would like simplified solutions and options that fit with most industry-standard tools.
I'm also willing to work with anyone on this as well.
$ dig -p 5353 raspberrypi.local @224.0.0.251
; <<>> DiG 9.8.3-P1 <<>> -p 5353 raspberrypi.local @224.0.0.251
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 21640
;; flags: qr aa; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
;raspberrypi.local. IN A
;; ANSWER SECTION:
raspberrypi.local. 10 IN A 192.168.1.175
A DNS query packet is sent to the multicast address 224.0.0.251 on UDP port 5353. All devices on your network that are listening on that address will see the query (e.g., Linux systems with avahi-daemon or Macs running mDNSResponder). The device whose name is raspberrypi will respond. RFC 6762 has all the details.
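On Linux with avahi-daemon, the equivalent one-shot lookup can be done with avahi-resolve (output assumes the same host as the dig example above):
$ avahi-resolve -n raspberrypi.local
raspberrypi.local	192.168.1.175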
I built a fully automated self-service Instagram publishing system, no private API hacks or anything like that, but instead completely automating lots of Android phones (OCR, pattern recognition, etc.). Mainly offered this to small businesses and friends. Took off due to Instagram's popularity and made six figures after a while. Now contemplating shutting it down, as Instagram has opened up publishing access to hand-picked partners.
A second business focused solely on Twitter analytics. Was about to shut that down years ago when Twitter suddenly offered free analytics, but it barely changed sign-up rates, so I kept it going.
On good days, I answer one or two emails, that's it. The occasional one-week-of-hell when things go sideways mixed in every few months, of course.
Buying tokens, which can be exchanged for a service at an unknown ratio (price), amounts to financing a new fast food chain by buying a million tokens for $1,000, where the tokens can be used to purchase hamburgers once the chain opens. The problem is that you have no idea if those million tokens — once the chain is up and running — will buy a thousand hamburgers or three.
So there’s a dichotomy here. Either your tokens remain low in value, and the end product of the business will be cheap (as it’s priced in these tokens) and you make no money as an investor, or the token price will increase and the end product will be expensive (but you’ll be able to sell your tokens for a profit).
In the end it’s a useless proposition: either token holders will be able to buy cheap hamburgers if the token price is low (thus causing a loss to the issuer/hamburger producer) or they will sell their expensive tokens for dollars and use dollars to buy their hamburgers elsewhere.
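To put illustrative numbers on it: $1,000 for a million tokens works out to $0.001 per token. If a hamburger is worth $3 and the token stays at $0.001, a burger costs 3,000 tokens, so the million tokens buy about 333 burgers, roughly the $1,000 you put in. If instead the token rises to $0.01, a burger costs only 300 tokens, but each of those tokens is a cent you could have spent anywhere; the rational move is to sell the million tokens for $10,000 and buy burgers with dollars.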
Different social networking applications imply different social contexts. I may follow you back on Twitter, but that doesn't mean that I want to be your friend on Facebook.
In German there's a term, eierlegende Wollmilchsau: an egg-laying pig that also gives wool and milk.
One reason the term exists is a German tendency to delay committing to, or outright dismiss, options because they do not meet ALL of a list of requirements.
I bring it up because I do not believe that it is possible to have an identity and anonymity concurrently while the service achieves all of its goals.
Either the service caters to anonymity, or it caters to identity. Trying to sit on the fence too long results in more specialized services dividing the customer base.