Hacker News | _Nat_'s comments

How could someone find a recruiter like the author?

I mean, I'm just starting a job-search and the top HackerNews story's about how prospective employers ought to be looking for applicants like me! I'd love to talk to such prospective employers; I wonder how to make that connection?


So you'd consider applicants like the ones the article describes to be very desirable?


Sure, much like magic and unicorns are desirable. The kind of human they are looking for doesn't actually exist.


It's funny because I was just starting my job-search today and, at the top of HackerNews, there was this article that seems to suggest that prospective employers look for applicants like me.

I wonder what sort of prospective employers might be looking for the sorts of things that this article describes?


It's difficult for me to appreciate how work in such fields might be "intellectual".

I feel like they're doing LLM-like opinion-pieces and acting as though the results are deeply meaningful. But the mechanics and results seem shallow and uninteresting.

In fields like Physics and Engineering, there're plenty of cranks who say crazy things. But in such fields, reality tears those people down -- their works fail; their perpetual-motion machines don't tend to generate infinite energy; their snake-oil doesn't seem to open people's latent psychic abilities; their mathematical theorems fall apart. Reality is a harsh blade that cuts them down without mercy. And people in those fields learn to be harsh/critical themselves, so as to survive the constant assaults from reality's judgement.

But the softer fields lack such harshness -- they're comically tolerant. It's like they're all whimsy; there'd seem to be little incentive for an academic to even bother with the extreme costs associated with rationality, as they'd just get out-competed on the metrics that they're actually judged by.

I mean, I don't care to see what (the early versions of) ChatGPT have to say of math; while chatbots might spit out a lot of junk, their rantings would be shallow and uninteresting. Why ought we have any more regard for the same mindlessness in other fields?

Point being that it seems off-topic to discuss such matters in terms of intellectualism -- unless we're using "intellectual" so loosely as to include stuff like ChatGPT-generated content.


> But the softer fields lack such harshness -- they're comically tolerant.

Anecdotally, I think the field of psychology is drawing public ire recently because of this. I read more and more complaints that certain highly touted (published?) therapies are failing and destroying individual lives. The other area where I see it is the societal impact of, again, highly touted & published psychology hurting entire classes of people.

The more bombastic the claim, the more likely it becomes de facto "reality". As you describe it, there is no simple & quick way to rationally disprove such claims -- and why would anyone bother? It's fashionable.


It's basically fancy SEO spam for job security. Not that they get any until tenured.


Seems inevitable enough that we may have to accept it and try to work within the context of (what we'd tend to think of today as) mass-spying.

I mean, even if we pass laws to offer more protections, as computation gets cheaper, it ought to become easier-and-easier for anyone to start a mass-spying operation -- even by just buying a bunch of cheap sensors and doing all of the work on their personal-computer.

A decent near-term goal might be figuring out what sorts of information we can't reasonably expect privacy on (because someone's going to get it) and then ensuring that access to such data is generally available. Because if the privacy's going to be lost anyway, then may as well try to address the next concern, i.e. disparities in data-access dividing society.


> we may have to accept it and try to work within the context of (what we'd tend to think of today as) mass-spying.

We do have to live in the nightmare world we're building (and as an industry, we have to live with ourselves for helping to build it), but we don't have to accept it at all. It's worth fighting all this tooth and nail.


> even by just buying a bunch of cheap sensors and doing all of the work on your personal-computer.

The cynical response: you won't be able to do that, because buying that equipment will set off red flags. Only existing users -- corporations and governments -- will be allowed to play.


Living off-grid is how I'm dealing with the whole situation nowadays.


How's it working out for you? I have similar plans and have most of the big pieces budgeted / ideated. But realistically I'm still 1 to 2 years out.


Except for the posting on HN?


Title looks like misinformation. Sub-title says something different (and more plausible).

Title: "Meta will enforce ban on AI-powered political ads in every nation, no exceptions".

Sub-title: "With several nations expected to hold elections next year, Meta confirms its generative AI advertising tools cannot be used for campaigns targeting specific services and issues.".

The idea of preventing advertisers from using AI at all ("no exceptions") seems fairly absurd -- for example, if an advertiser asks ChatGPT to spell-check something a human wrote, how would Meta know that ChatGPT did the spell-checking? But if Meta's just trying to manage how people access its own tools, then that'd seem like a different scenario.

Presumably they mean that their tools couldn't be used directly, rather than not at all, though. For example, if Alice uses one of Meta's tools for some other declared purpose to spell-check a word that Alice'll then include in an ad that she'll ask Bob to deploy on Meta, then how would they detect such indirect usage? Though they might still try to detect their own tools' signatures on, say, images or longer bodies of text.


What makes some folks so attached to particular browsers?

Personally, I'm using a mix of Edge, Firefox, and Chrome. While in theory I could just use one of them (which would probably be Firefox), it's kinda like having 3 different super-profiles for a web-browser (where each browser is its own super-profile), so I'm mostly just using all 3 out of laziness.

I don't particularly trust Chrome nor Edge, so I just don't use them for anything important. Not that I'm 100% confident in Firefox, but if I've got to do something important, Firefox is the easy pick. Then I guess I end up favoring Chrome or Edge for everything else, since I don't want to junk up Firefox with nonsense (so Firefox'll remain solid for when it's appropriate). Between Chrome and Edge, I guess I favor Chrome for junk-level tasks since Chrome feels the most separated (being neither used for important stuff like Firefox nor being tied to the OS like Edge).

I get that some folks might have a business-critical app with compatibility-issues limiting their freedom-of-choice when it comes to certain tasks, but outside of such niche cases, what's the big deal?


I'm not sure if I'm agreeing or disagreeing or just providing context, but for me personally it's not about being attached to a particular browser, rather it's about being repulsed by most of them. Edge is user-hostile spyware, Chrome is approximately the same but Google flavored and very very marginally better, I don't trust most Firefox forks to stay on top of security issues, and I don't use a Mac. The result is that my only options really are Firefox, or maybe some of the lesser WebKit browsers. And even then... I'm writing this comment in Firefox, but I don't even particularly like it, it just sucks less than the alternatives.


When I was doing web development, I was attached to Chrome's dev tools for that.


The IPCC's reports might be a good starting-point.

["Climate Change 2022: Mitigation of Climate Change"](https://www.ipcc.ch/report/sixth-assessment-report-working-g... ) appears to be the most recent specifically on mitigating climate-change.

From [this PDF's page-117](https://www.ipcc.ch/report/ar6/wg3/downloads/report/IPCC_AR6... ):

> Net zero CO2 industrial-sector emissions are possible but challenging (high confidence). Energy efficiency will continue to be important. Reduced materials demand, material efficiency, and circular economy solutions can reduce the need for primary production. Primary production options include switching to new processes that use low-to-zero GHG energy carriers and feedstocks (e.g., electricity, hydrogen, biofuels, and carbon dioxide capture and utilisation (CCU) to provide carbon feedstocks). Carbon capture and storage (CCS) will be required to mitigate remaining CO2 emissions {11.3}. These options require substantial scaling up of electricity, hydrogen, recycling, CO2, and other infrastructure, as well as phase-out or conversion of existing industrial plants. While improvements in the GHG intensities of major basic materials have nearly stagnated over the last 30 years, analysis of historical technology shifts and newly available technologies indicate these intensities can be significantly reduced by mid-century. {11.2, 11.3, 11.4}


Cool thanks!

From the relevant section, 11.3.5:

> Biofuel use may also be critical for producing negative emissions when combined with carbon capture and storage (i.e., bioenergy with carbon capture and storage – BECCS). Most production routes for biofuels, biochemicals and biogas generate large side streams of concentrated CO2 which is easily captured, and which could become a source of negative emissions (Sanchez et al. 2018) (Section 11.3.6).

When I hear "carbon capture" I think "direct air capture (DAC)" since that's kind of a novel concept, while point-source carbon capture seems to be more of an extension of circular-economy-style reasoning. I am skeptical of the former (i.e. massive fans with catalysts and filters) being useful to any significant degree because of the energy requirements. I think DAC advocates hide under the catch-all phrase "carbon capture" to avoid scrutiny.

DAC seems worthless compared to just growing more plants (and more effective ones) to do a similar job. But point-source capture of concentrated CO2 seems like an obvious step in the right direction. Even better if it can help to replace traditional fossil fuels in legacy ICE vehicles.

So in the context of TFA it seems that the pipeline was at least not a bad idea, though we haven't seen an audit on when all that construction and operation would be carbon-neutral after everything's said and done. That timeline could be longer than the service life of the pipeline which would make it worse than pointless.


> From what I know, carbon capture over a field is not exactly a solved problem.

Doesn't appear to be carbon capture over a field. Instead:

> Navigator’s project would have laid pipelines across five US states—South Dakota, Nebraska, Minnesota, Iowa, and Illinois—to collect CO₂ from ethanol and fertilizer plants and pipe the gas to an underground storage site in Illinois.

Sounds like they were planning on point-source capture, which is generally a good bit more efficient than open-air capture.

If it were carbon-capture over a field, then it'd probably be under the category of "open-air capture" -- which is technically easy to do (as capture in general is; we've had CO2-capture technology since the 1930s), just more costly (since it's less thermodynamically efficient to capture from a low-concentration source like the atmosphere, relative to capture from a high-concentration source like the flue-gas from a plant).

Capturing CO2 from point-sources (like the flue-gas from plants) tends to be relatively efficient, which seems to work out better both economically and environmentally.


> (Is there some kind of universal law that you're citing, that claims that it's impossible to build a useful, unbiased system?)

Usually it'd be a trade-off, where you'd give up some quality at doing the main job to gain whatever the secondary objective would be, e.g. producing results according to whatever desired distribution.

That said, the bigger issue would be defining what "unbiased" means. There're some mutually-exclusive notions of what it'd take to be unbiased, so if "unbiased" requires not being biased according to any perspective, then, yeah, it wouldn't generally be possible.

For a quick example, say that we want to be unbiased in our representation of men-vs.-women in a profession. Then, is the proper ratio: (1) 50% men and 50% women; (2) population-weighted ratios (because men and women aren't generally equally represented in the population); (3) profession-weighted ratios (because men and women aren't generally equally represented in a profession); (4) observer-subjective ratio (that is, a realistic ratio for how often a viewer might actually see men-vs.-women of that profession); (5) something else? What about intersex people -- should they be represented, and if so, at what ratio, and how should men/women be adjusted to account for them?

Then, what if the photo would be served by having a person of a certain height, to best fill the space while not covering up more -- then, should the algorithm be biased toward a man or a woman based on typical heights, to accurately represent sex-specific height-distributions? Or, should it deviate from that, and present men and women as having the same height-distribution? And if we do deviate from that to pretend that men and women have the same height-distribution, but that height-distribution more closely matches the actual height-distribution for one sex than the other, then is that itself a form of bias?

Then, same as above, but for weight. Then for body size.

Then, what if men and women have different fashions in the relevant culture, and one fashion would better fit the scene -- can that be a factor? And if so, what should be the logic for picking the relevant culture?

Then, how should interactions be handled? For example, if we're generating pictures of doctors who're also mothers, then should men be included in that, or is it okay to only have female doctors in that context? Or what if we're generating pictures of doctors at a conference specifically for female doctors that has some male attendees -- then what ratio would be correct?

Point being that, for someone designing algorithms that would be "unbiased", they'd presumably want to know what folks would accept as a valid solution to that objective. Without a clear definition of what's desired, it'd seem like any particular solution would be open to criticism from other perspectives.


> Wouldn’t something like isMale*P(male=.66) work fine?

It doesn't think like that.

If it did, they could've just done `P(hasFiveFingersPerHand)=0.99999`.

But it doesn't even necessarily draw what you ask it to. Instead, it applies a sequence of de-noising transforms that it's been trained to associate with what the prompt sounds like; whatever those transforms produce will, hopefully, be sorta like what was requested.


Custom loss functions absolutely work and work basically the way described above.

https://colab.research.google.com/drive/1dlgggNa5Mz8sEAGU0wF...

You can see them define a custom color loss and apply it simultaneously with the regular diffusion loss. I've actually expanded this notebook to allow regional specification of the custom loss.

It's quite difficult to define a function that detects if an individual has 5 fingers or not. That's the real issue.
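For anyone curious what "define a custom color loss and apply it simultaneously with the regular diffusion loss" might look like, here's a minimal pure-Python sketch. To be clear, this is not the notebook's code: the function names, weights, stand-in reconstruction loss, and the numerical gradient are all my own simplifications of the general idea.

```python
def color_loss(pixels, target=(0.9, 0.5, 0.2)):
    """Custom loss: squared distance between the image's average color
    and a target color (the target here is an arbitrary orange)."""
    n = len(pixels)
    avg = [sum(p[c] for p in pixels) / n for c in range(3)]
    return sum((avg[c] - target[c]) ** 2 for c in range(3))

def recon_loss(pixels, reference):
    """Stand-in for the regular diffusion/denoising loss: just distance
    to a reference image, to keep the sketch self-contained."""
    return sum(
        (p[c] - r[c]) ** 2
        for p, r in zip(pixels, reference)
        for c in range(3)
    ) / len(pixels)

def guided_step(pixels, reference, lr=0.05, color_weight=2.0):
    """One gradient-descent step on recon_loss + color_weight * color_loss.
    Gradients are estimated numerically (finite differences) for clarity."""
    eps = 1e-4

    def total(px):
        return recon_loss(px, reference) + color_weight * color_loss(px)

    base = total(pixels)
    out = []
    for i, p in enumerate(pixels):
        q = list(p)
        for c in range(3):
            bumped = [list(r) for r in pixels]
            bumped[i][c] += eps
            grad = (total(bumped) - base) / eps
            q[c] = p[c] - lr * grad
        out.append(tuple(q))
    return out
```

Repeated `guided_step` calls pull the image's average color toward the target while the reconstruction term resists drifting too far from the reference; the `color_weight` knob trades one objective off against the other, which is the same shape as the notebook's setup.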


The comment I'd responded to seemed to have thought that StableDiffusion picked what the sex of a person would be according to some internal odds that could be modified.

My point was that it doesn't actually think like that. For example, prompting StableDiffusion for a picture of a doctor doesn't necessarily get it to draw a human at all, much less a doctor of a pre-determined sex; instead, StableDiffusion de-noises the image until the result emerges, where that result would (ideally) contain a doctor of whatever sex it happened to come up with.

That said, you're right that we can add more code to try to guide things.

We could even just brute-force it by re-generating images over and over, or tweaking them after generation, until they match exactly what we wanted. (Realistically, something like branch-and-bound would probably be preferred to blind guess-and-checking.)
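The brute-force version is just rejection sampling. A toy sketch, where `fake_generate` and the acceptance predicate are hypothetical stand-ins for a real image generator and a real attribute classifier:

```python
import random

def fake_generate(rng):
    """Pretend generator: returns attributes the model 'happened' to
    produce. A real pipeline would return an image and run a classifier."""
    return {
        "sex": rng.choice(["male", "female"]),
        "fingers_per_hand": rng.choice([4, 5, 5, 5, 6]),
    }

def generate_until(predicate, rng, max_tries=1000):
    """Keep re-generating until an output passes the check (or give up)."""
    for _ in range(max_tries):
        candidate = fake_generate(rng)
        if predicate(candidate):
            return candidate
    raise RuntimeError("no acceptable sample within budget")

rng = random.Random(0)
img = generate_until(lambda c: c["fingers_per_hand"] == 5, rng)
```

The obvious cost: the expected number of tries grows with how rare the acceptable outputs are, which is why guided generation (or branch-and-bound-style refinement) beats blind re-rolling for anything the model rarely produces.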


My point was more that you can add these guardrails without having to keep track of what the model had previously generated.

And I think if you used a perfectly balanced dataset for training, you’d get these guardrails for free because the right probabilities would be baked into the model’s weights.
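The random-selection guardrail can be done statelessly, outside the model: sample the attribute up front from a target distribution and bake it into the prompt. A sketch -- the 50/50 target and the prompt wording are my assumptions, not anything StableDiffusion actually does:

```python
import random

# Hypothetical target distribution for the attribute we want balanced.
TARGET = {"male": 0.5, "female": 0.5}

def attributed_prompt(base_prompt, rng):
    """Sample an attribute from TARGET and prepend it to the prompt.
    No history of past generations is needed."""
    attrs, weights = zip(*TARGET.items())
    choice = rng.choices(attrs, weights=weights)[0]
    return f"a {choice} {base_prompt}", choice

rng = random.Random(42)
counts = {"male": 0, "female": 0}
for _ in range(10_000):
    _, sex = attributed_prompt("doctor", rng)
    counts[sex] += 1
# Over many independent calls, the empirical split approaches the target
# without any shared state between calls.
```

Whether the model then honors the attribute in the prompt is a separate question, per the de-noising point above -- but the selection step itself needs no memory.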


Yeah, the idea to use random-selection instead of keeping track of generation-history seems reasonable. The idea of guardrails from perfect-balancing seems less obvious to me.

For example, say someone wants to generate a "US President" -- what would the ideal range of outputs be?

The article checked for just two things: sex (male or female) and skin-tone (I, II, III, IV, V, or VI). To date, all US Presidents have been male, and they were probably mostly skin-tones I or II (not bothering to check), except for Obama who was probably.. like IV or something (still not bothering to check).

So if we run StableDiffusion for a "US President", what would a "perfectly balanced" output look like? Should there be any women? What about the skin-tone distribution?

Also, Obama was a 2-term President, so.. if his skin-tone should somehow affect the distribution, should it have a stronger effect because he was in office for longer than average? Or should all US Presidents have the same effect regardless of their time in office? And either way, why?

