
I claim that by the end of the year, all VC jobs will be replaced by AI. But I don't know why my claim is not taken seriously, or why it isn't very popular!

Myth BUSTED - AI can't do coke or ketamine with founders and has no value in your use case

Marc Andreessen would beg to differ :)

On a recent a16z podcast, Andreessen said:

"It's possible that [being a VC] is quite literally timeless, and when the AIs are doing everything else, that may be one of the last remaining fields that people are still doing."

Interestingly, his justification is not that VCs are measurably good at what they do, but rather that they appear to be so bad at what they do:

"Every great venture capitalist in the last 70 years has missed most of the great companies of his generation... if it was a science, you could eventually dial it in and have somebody who gets 8 out of 10 [right]. There's an intangibility to it, there's a taste aspect, the human relationship aspect, the psychology — by the way a lot of it is psychological analysis."

The podcast in question: https://youtu.be/qpBDB2NjaWY

(Personally, I’m not quite sure he actually believes this - but watching him is a certain kind of masterclass in using spicy takes to generate publicity / awareness / buzz. And by talking about him I’m participating in his clever scheme.)


Even his justification for why AI can't become a VC sounds like you could just pick at random and have the same chance of success, which means even the personal touch he advocates is useless. A monkey could do his job.

That sounds more like him trying to justify, to his peers, all of the possible harms to society as a whole.

"It'll screw everyone else, but we'll be okay, so..."


Ignore celebrities.

The way I’d read that take is that being a “good” VC is about having enough money to spread around and enough networking connections to generate the right leads. After that, pretty much any idiot can do the job.

TL;DR: AI can replace labor but not capital. More news at 11.


Well it is capital. It's just different capital.

It will probably never work. Companies have spent probably the last decade(s?) closing everything on the Internet:

* no more RSS feeds
* paywalls
* the need to have an "app" to access a service
* killing open protocols

And all of a sudden, everyone will expose their data through simple API calls?


Indeed, I think a lot of companies will hate the idea of losing their analytics and their app-mediated control over their users.

I see it working in a B2B context where customers demand that their knowledge management systems (ticketing, docs, etc...) have an MCP interface.
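If a vendor wanted to offer that, the server side can be fairly small with the official MCP Python SDK. A minimal sketch, where the tool and its ticketing backend are hypothetical placeholders:

```python
# Minimal sketch of exposing a ticketing system over MCP, using the
# official Python SDK's FastMCP helper. The tool and its backend are
# hypothetical; a real server would call the actual ticketing API.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("ticketing")

@mcp.tool()
def search_tickets(query: str) -> list[str]:
    """Search tickets by free-text query."""
    # Placeholder: a real implementation would query the ticketing backend.
    return [f"TICKET-1: results for {query!r}"]

if __name__ == "__main__":
    mcp.run()  # speaks MCP over stdio by default
```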


You either do it, or someone wraps you with one using browser_use or similar.

I think it is a huge opportunity to dethrone established players that don't want to open up that way. Users will want it, and whoever gives it to them wins.

I do: a OnePlus 6, PMOS "edge" with OpenRC + Phosh. Everything is fine, except I still need to reboot the phone after each call to be sure the audio is working.


How is 4G calling (VoLTE) these days? Last I heard it needed quite a bit of a manual work to get it going.


Sailfish(OS) supports VoLTE on newer, supported devices. For community ports and other mobile Linux distros it's AFAIK still rare. Closed drivers and obtaining carrier configurations for other countries are the two big showstoppers.


Yes they are!

I think I know the reason: OAuth2 naming. In the OAuth RFC https://www.rfc-editor.org/rfc/rfc6749 they named one of the roles "client", but it is meant to represent a server in the more standard flow, while they named the browser the "user-agent". People then understood "client" to mean the browser, so they went nuts storing access tokens and refresh tokens in the browser's local storage, instead of storing them in the server's session storage, accessible via a session cookie.

Here is what people should have done: https://www.rfc-editor.org/rfc/rfc6749#section-4.1 (you can see there that the tokens never reach the user agent, so the server can keep them in a session and identify the user agent with a cookie). And here is what most people did: https://www.rfc-editor.org/rfc/rfc6749#section-4.2
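To make the distinction concrete, here is a minimal sketch of the section 4.1 flow done as intended, in Python/Flask with hypothetical endpoints and credentials: the code-for-token exchange happens server-to-server and the tokens stay server-side, while the browser only carries an opaque session id.

```python
# Minimal sketch (hypothetical endpoints/credentials) of RFC 6749
# section 4.1 as intended: tokens are exchanged server-to-server and
# stored server-side; the browser only carries an opaque session id.
import secrets

import requests
from flask import Flask, make_response, redirect, request

app = Flask(__name__)

AUTH_URL = "https://auth.example.com/authorize"    # hypothetical
TOKEN_URL = "https://auth.example.com/token"       # hypothetical
CLIENT_ID = "my-client-id"                         # hypothetical
CLIENT_SECRET = "my-client-secret"                 # hypothetical
REDIRECT_URI = "https://app.example.com/callback"  # hypothetical

SESSIONS: dict[str, dict] = {}  # server-side session store (in-memory demo)

@app.route("/login")
def login():
    sid = secrets.token_urlsafe(32)
    SESSIONS[sid] = {"state": secrets.token_urlsafe(16)}  # CSRF protection
    resp = make_response(redirect(
        f"{AUTH_URL}?response_type=code&client_id={CLIENT_ID}"
        f"&redirect_uri={REDIRECT_URI}&state={SESSIONS[sid]['state']}"
    ))
    resp.set_cookie("sid", sid, httponly=True, secure=True)
    return resp

@app.route("/callback")
def callback():
    sess = SESSIONS.get(request.cookies.get("sid", ""))
    if sess is None or request.args.get("state") != sess["state"]:
        return "invalid session or state", 403
    # The exchange happens server-to-server (section 4.1); the tokens
    # never transit through the user agent.
    tokens = requests.post(TOKEN_URL, data={
        "grant_type": "authorization_code",
        "code": request.args["code"],
        "redirect_uri": REDIRECT_URI,
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
    }).json()
    sess["access_token"] = tokens["access_token"]      # stays on the server
    sess["refresh_token"] = tokens.get("refresh_token")
    return redirect("/")
```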


This is a very useful piece of information.

According to the saying "There are two hard problems in Computer Science: naming things and cache invalidation"...

The OAuth 2.0 editors failed to properly name things.


But there are many times when the client _is_ the browser. Are you sure you're not confused by that?


In the RFC, the browser is named the "user-agent". And in the OAuth2 flows, the browser acts as the client only in the implicit flow. Also, the authors' intent for the implicit flow was that the "client" be a mobile/desktop application, and not specifically something running in a browser.


Yes, but I think there are plenty of OAuth2 libraries/clients implemented in TS/JS to be used directly from a web application. A JavaScript client running in a web page still presents itself to the OAuth2 server as the regular "User-Agent" that's used for the web/HTML parts of the interaction, unless the requests are enhanced with a custom header.

For these clients, saving the tokens in the browser's local storage is, in my opinion, the more elegant option, compared to saving them in a cookie and thus polluting the rest of the browser's requests to that same host.


I worked 12 years in the ad-tech industry, 3 of them at a company using this kind of data to measure the performance of "drive to store" campaigns: run an online campaign, then see whether people visit the actual physical store, based on geo data. The company was audited by the CNIL (the French regulator) for GDPR compliance, so we were "anonymizing" the data, meaning one-way hashing the IFA (the phone's unique advertising identifier) and storing locations as 300m x 300m squares. I put quotes around "anonymizing" because geo data from your phone in the evening/night is enough to know where you live (with 300m precision). The rest of the industry in France and Europe was still a Wild West, though (around 2020).
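For the curious, that kind of "anonymization" amounts to something like the following sketch (illustrative only, not the actual pipeline; the salt and grid math are assumptions):

```python
# Sketch of the "anonymization" described above: one-way hash the
# advertising ID and snap GPS coordinates to a ~300m x 300m grid cell.
import hashlib

GRID_DEG = 300 / 111_320  # ~300 m expressed in degrees of latitude

def anonymize(ifa: str, lat: float, lon: float, salt: str = "per-dataset-salt"):
    # One-way hash: the raw IFA is not stored, but the hash is still a
    # stable pseudonymous identifier, hence the scare quotes above.
    hashed_id = hashlib.sha256((salt + ifa).encode()).hexdigest()
    # Snap to the nearest grid cell; longitude cells shrink away from
    # the equator, so this is ~300 m only roughly (fine for a sketch).
    cell_lat = round(lat / GRID_DEG) * GRID_DEG
    cell_lon = round(lon / GRID_DEG) * GRID_DEG
    return hashed_id, cell_lat, cell_lon

# Evening/night pings still cluster on the cell where you live.
print(anonymize("00000000-0000-0000-0000-000000000000", 48.8566, 2.3522))
```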


I can’t imagine that 300m precision is all that useful for measuring store visit campaigns.


Or it could be a browser preference that then sends an HTTP header? Wait: https://developer.mozilla.org/fr/docs/Web/HTTP/Headers/DNT


The problem with DNT was that there was no established legal basis governing its meaning, and some browsers just sent it by default, so corporations started arguing it was meaningless: there was no way to tell whether it indicated a genuine request or was merely an artefact of the user's browser choice (which may be meaningless as well if they didn't get to choose their browser).

As the English version of that page says, it's been superseded by GPC, which has more widespread industry support and is trying to get legal adoption, though I'm seeing conflicting statements about whether it has any legal meaning at the moment, especially outside the US; the described effects in the EU seem redundant given what the GDPR and the ePrivacy directive establish as the default behavior: https://privacycg.github.io/gpc-spec/explainer
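For reference, the DNT and Sec-GPC request headers themselves are trivial for a server to read; what "honoring" them means is the legal question. A minimal sketch, with a hypothetical Flask handler:

```python
# Minimal sketch: reading the (real) DNT and Sec-GPC request headers.
# What honoring them entails depends on the site and applicable law.
from flask import Flask, request

app = Flask(__name__)

@app.route("/")
def index():
    dnt = request.headers.get("DNT") == "1"
    gpc = request.headers.get("Sec-GPC") == "1"
    if dnt or gpc:
        # e.g. skip third-party trackers / treat as an opt-out of sale
        return "tracking disabled for this request"
    return "default behavior"
```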


Plot twist: in the article's drawings, replicas one and two are separated by a network, and that network can fail.

The author seems not to understand what the P in CAP means.


I missed your comment and just made the same comment in different words. He literally has a picture of a partition.


"Non failing node" does not mean "Non partitioned node", simple as that.

If you treat a partitioned node as "failed", then CAP does not apply. You've simply left it out cold with no read/write capability because you've designated a "quorum" as the in-group.
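A toy sketch of that point (illustrative numbers, not any particular system): with a majority quorum over three replicas, the minority side of a partition serves neither reads nor writes, so calling it "failed" sidesteps CAP rather than refuting it.

```python
# Toy illustration: with a majority quorum of 3 replicas, a node cut
# off on the minority side of a partition serves neither reads nor
# writes; declaring it "failed" is a definition choice, not a CAP win.
REPLICAS = {"r1", "r2", "r3"}
QUORUM = len(REPLICAS) // 2 + 1  # = 2

def can_serve(reachable: set[str]) -> bool:
    """A replica may serve reads/writes only if it can reach a quorum."""
    return len(reachable) >= QUORUM

# Partition: {r1, r2} | {r3}
print(can_serve({"r1", "r2"}))  # True  -> majority side stays available
print(can_serve({"r3"}))        # False -> minority node is left out cold
```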


I have been doing this kind of stuff in both the ad-tech and the trust & safety industries, mainly to handle scalability. Something that looks like the "Event-Carried State Transfer" described here: https://martinfowler.com/articles/201701-event-driven.html

These systems work fine, but maybe with some common ground:

* very few services
* the main throughput is "fact" events (things that actually happened)
* what you move via "event-carried state transfer" is basically the configuration: one service owns it, with a classical DB and a UI, but exposes it to the whole system through this kind of event (and all consumers treat it as read-only)
* you usually have to deal with eventual consistency a lot in this kind of setup (so it scales well, but there is a tradeoff)
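To make the configuration part concrete, here is a rough sketch using kafka-python-style producers/consumers (topic names and payloads are hypothetical):

```python
# Sketch of "event-carried state transfer" for configuration, in the
# spirit described above. The owning service publishes the full config
# on every change to a (ideally log-compacted) topic; every consumer
# rebuilds an eventually consistent, read-only local copy from it.
import json

from kafka import KafkaConsumer, KafkaProducer

# --- owning service: publishes configuration changes as fact events ---
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode(),
)
producer.send(
    "config.campaigns",  # hypothetical compacted topic
    key=b"campaign-42",
    value={"campaign_id": 42, "budget": 1000, "active": True},
)
producer.flush()

# --- any consumer: maintains a read-only local copy of the config ---
config: dict[bytes, dict] = {}
consumer = KafkaConsumer(
    "config.campaigns",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",  # replay the log to rebuild state
)
for msg in consumer:
    config[msg.key] = json.loads(msg.value)  # last write wins per key
    # ...serve traffic from `config`; writes go only through the owner.
```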


> why game devs don't unit test

Sources?


The issue with the Spring ecosystem is that people use it without knowing why, or which problem it solves, but just because almost everyone else is using it. And most of the time they don't need Spring (maybe a company like Netflix did, but even there it didn't prove to be the right choice in the end).

