
For me, this has been perplexity.ai. Give it a query and it expands that into multiple queries (possibly chained, depending on results), then synthesizes the results into an answer with citations.


Here is levels.fyi software engineer data for Walmart, Walmart Global Tech, and Amazon: https://www.levels.fyi/t/software-engineer?compare=Walmart+G...


And yet he will still likely demand I personally post this very same link, because for some reason that's important...


Some other options:

1. Piping into `fzf -m` (use tab to toggle selections and the built-in search to filter options); see the sketch after this list.

2. Percol https://github.com/mooz/percol (also has filtering, use ctrl-space to toggle selection).
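
For example (a minimal sketch; `ps aux` is just an arbitrary input, and it assumes fzf is on your PATH), option 1 might look like:

    # Tab toggles a selection, typing filters the list, and Enter prints
    # the selected lines to stdout for the rest of the pipeline.
    ps aux | fzf -m | awk '{print $2}'   # e.g. print the PIDs you picked

Percol is used the same way, just with ctrl-space instead of tab to toggle.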


I always find myself needing a tool to interactively let me select which columns of an output to print. Basically I'm interactively writing something like `awk '{print $5, $10}'` or whatever, but I'm not sure which columns I need.

Anyone know anything like that? Ideally, after running, it would somehow output which columns were selected so I could reuse them later.
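
To make it concrete, something along these lines is what I keep hand-rolling (a rough sketch only: it assumes the first line of input is a header row and that fzf is installed, and every name in it is made up):

    #!/bin/sh
    # Pick columns by header with fzf -m, then reprint only those columns
    # with awk. The awk program it builds is also echoed so it can be reused.
    input=$(cat)
    cols=$(printf '%s\n' "$input" | head -n1 \
            | tr -s ' ' '\n' \
            | nl -w1 -s' ' \
            | fzf -m \
            | awk '{printf "$%s,", $1}' \
            | sed 's/,$//')
    echo "equivalent: awk '{print $cols}'" >&2
    printf '%s\n' "$input" | awk "{print $cols}"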


Ultimate Plumber can do this.

https://github.com/akavel/up
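
A rough usage sketch from memory (treat the details as approximate):

    # Pipe the data you want to explore into up, then type a pipeline
    # (grep/awk/cut/...) at its prompt and watch the output update live.
    ps aux | up
    # e.g. inside up:  awk '{print $2, $11}'
    # Ctrl-X exits and, if I remember right, writes the pipeline out as a
    # small shell script you can rerun later.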



I wondered how this could reliably distinguish between a scene cut and a cut to commercial without content hashes and/or program schedules being shared through the network, then realized that 20 years ago was already 2003 and of course home internet was common by then.

Apparently one offline technique was checking for black frames inserted by local stations.
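
If you want to play with that idea today, ffmpeg's blackdetect filter does roughly the same thing (the filename and thresholds below are just placeholders):

    # Log stretches of (near-)black video at least 0.5s long; candidate
    # break points show up as blackdetect lines in the filter output.
    ffmpeg -i recording.ts -vf "blackdetect=d=0.5:pix_th=0.10" -an -f null - 2>&1 | grep blackdetect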

Some more information here: https://en.m.wikipedia.org/wiki/ReplayTV


A person today could build something to be used by a person tomorrow.


I subscribe to this theory as well. Another earlier book about the topic is I Am A Strangle Loop by Douglas Hofstadter[0]

[0] https://www.goodreads.com/book/show/123471.I_Am_a_Strange_Lo...


The typo (Strange -> Strangle) gives this comment a very surreal vibe.


You can request your data here https://www.goodreads.com/user/edit?ref=nav_profile_settings

They will send a confirmation email. After you click the link to confirm, it says "We will provide your information to you as soon as we can. Usually, this should take no more than a month."

EDIT: you can export your book database as CSV (shelves, ratings, reviews, etc.) here https://www.goodreads.com/review/import (there's a link towards the top to export; disregard that the URL says "import").


For those who need to hear it, Bookwyrm is a good alternative to Goodreads, and you can import your CSV!


Can it reasonably take a month to export the data from Goodreads? I barely use it, but from what I recall it's basically lists of will-read and have-read, right?

Is that delay strictly a "cooling off" sort of tactic?


I assume this data export includes much more than just the book data (which I just discovered you can get immediately as CSV via https://www.goodreads.com/review/import), which might be part of a compliance requirement.


Update: I got an automated email from Goodreads to download this export. It contains a bunch of JSON files with user/usage-related data for my account, like request logs, newsfeed updates, Kindle logins, site settings, etc.


This reminds me of a great short story about the intelligence of parrots, "The Great Silence" by Ted Chiang.

Full text available here: https://electricliterature.com/the-great-silence-by-ted-chia...


For the Benefit of Mankind, by Cixin Liu?


No, much older than that and a short story I think. Probably either Asimov or Clarke.


The 5-place date is meant to draw attention to a longer view of time, a position advocated for by the Long Now Foundation.[0]

[0] https://blog.longnow.org/02013/12/31/long-now-years-five-dig...
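
Following the convention is just a literal leading zero in the format string, e.g. with date(1):

    # Prints the current date with a five-digit year, e.g. 02024-01-15
    date +0%Y-%m-%d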


Thanks, interesting. I will of course now 1-up and use a double 00 prefix on any publicly posted dates.

