tehsauce's comments

Not sure if he still is, but definitely was. Many of his videos are filmed in the Amazon NYC office.


He left Amazon a few months ago to do a job where he could spend more time focusing on Krazam! Check out the Patreon! They are both getting super serious and are hopefully going to put out more great content soon!

https://www.patreon.com/KRAZAM


I'm surprised he got the go-ahead from Amazon to film there


Haha. You're also looking to make the leap from engineering to comedy?


You made me laugh. I think I love you.


If you have written something like this somewhere, or know someone that has, please share!


ASI is nothing like a corporation


No, they're not. Corporations have known, concrete impacts on the world, whereas the dangers of AI are, so far, hypothetical. ASIs are (as yet) fictional.

Another difference: most corporations will avoid doing illegal stuff if the penalties are large enough: the corporation alignment problem is political. Pretty much no extant AI systems can be instructed in this way: we don't know how to align AIs even in theory.


For organisms the ultimate punishment is death. How do you delete an AI from the internet?


sudo rm * -rf


That won't provide any motivation: no AI system yet created fears death (except perhaps some of the really simple, evolved ones – but I'd question whether they're sophisticated enough to fear).


> Corporations have known, concrete impacts on the world

I hate to do this, but can you enumerate them?


An ASI is very much like a corporation; a corp is effectively an AGI, just running very slowly: at the speed of bureaucracy.


Another awesome project! Note that, as of this moment, the CUDA part is aspirational: there is no GPU code in the repo yet.


At least one of the runtime implementations I checked out is built on top of Skia. Looks like they support a number of possible backends!


I was curious how well GPT-4 could de-noise the text. With the prompt:

uncover the text from the message which as had noise introduced: In Gramm@r, an @R+icle_Is.anY memBer I6.@ claSS oF_dedicateb_WOnbs that arE used m|th_No0n phR@ses tO MarK_Th4 id4ntifiabilitY of +Je ne64Rents of THe noun qhnases. Th4 c@tegoRy of articles constitutES a panT oF sPEecJ:

It was able to perfectly recover the original text.
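For anyone who wants to reproduce this kind of corruption locally, here's a minimal sketch (my own, not whatever the linked tool actually does) of character-level substitution noise in Python:

```python
import random

# Hypothetical "leetspeak"-style substitution table, chosen to resemble
# the noise in the example above.
SUBSTITUTIONS = {"a": "@", "e": "4", "o": "0", "i": "|", "t": "+", "s": "$"}

def add_noise(text: str, rate: float = 0.3, seed: int = 0) -> str:
    """Replace each substitutable character with probability `rate`."""
    rng = random.Random(seed)
    out = []
    for ch in text:
        if ch.lower() in SUBSTITUTIONS and rng.random() < rate:
            out.append(SUBSTITUTIONS[ch.lower()])
        else:
            out.append(ch)
    return "".join(out)

print(add_noise("the category of articles constitutes a part of speech"))
```

Cranking `rate` toward 1.0 approximates the "slider all the way up" case from the comments below.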


That's pretty incredible! I did the same with a paragraph from another article I was starting to read on HN (here: https://news.ycombinator.com/item?id=39740654). I put it into this tool, cranked the slider all the way up, and then gave this to GPT-4:

uncover the text from the message which as had noise introduced: whEn_O_mIYeD uq2taTe.@t_tJ4 9`oght.of +Je pandeM|c?_I fInA1ly.reCykl4d @ dAMker?S_BOX OF useR.90|dEs fIr_tJ|n9s.I_No_lIr9er_OwNEd: a toaSt4R:Iven, A_Palw_+iLI+,_a.vxB. th@t_B0C_hab moz3b wit9_Me_From CHEls4U:+o_We2+_+hI1ly_To PaRk_51Op4, ther:Ir to~6rAmerCy_ANd Crown heoGhtS and pet48:XooPEn VIll4Ge, evEN thoUGh_o jUdN?+_4bd4d or r4tnIeVEb @nythirH:from +JE~CollEction.in yeanS _7HE_OlDEs+.MUr0als w4rE_FrIM tJA.mid=90s, wher I LA6T_M?.pAnenPZ^.JImE aNb 2tarTed.B@IldIng a M4naGerie_O6 applianC42.aNd.9aDgEt2 FoR.mJich_o.mas so1ely rEsponsIB1e: Instr0Ct|Ins Wer3M?T online.PJEN,_So.it:M@2.T9nO09H.+hesE CheAPly qRinteb book1Ets tHat O_unberstoID Th4.op4r@P|nG_lOg|cs.i6.m?.devicES _Sowe 9uIdeS_HaD.2ectIOns.Ir.ClEAnINg, mainTENAnce, troub1esHOotINg" arD req@i8?.Oth4R2.JUd diaGRam2 +J@+ mIU1d_HelP m3 In6A8.jiM.TI.f|$_4MvTHINg +J@+.drOk4._3zEry mAMUA1 dEPOsiT47.|r tJ@+.dI$_Was A_sign I6.my INT4r+|ir.+o_c@r4.F0n~TJe_TJiNG@maJIG_it came with

and while the resulting text wasn't perfect, it did get 90% of it and it stayed true to form.

"When I moved upstairs at the height of the pandemic, I finally recycled a banker's box of user guides for things I no longer owned: a toaster oven, a Palm Pilot, a VCR. That box had moved with me from Chelsea to West Philly to Park Slope, then to Gramercy and Crown Heights, and even though I hadn't added or retrieved anything from the collection in years, the oldest manuals were from the mid-90s, when I last moved. I started building a menagerie of appliances and gadgets for which I was solely responsible. Instructions weren't online then, so it was through these cheaply printed booklets that I understood the operating logics of my devices. Some guides had sections on cleaning, maintenance, troubleshooting, and repairs. Others just diagrams that could help me navigate fixing anything that broke. Every manual deposited in that discard was a sign of my interaction with caring for these thingamajigs it came with."

I'm kind of shocked, honestly. I can make out some of it, but a lot of it is too garbled; it's interesting that it manages to pull the text back out, even "thingamajigs"!


Still no GPU support. They are nowhere near an MVP.


Have you considered using JAX? You can efficiently compute the Jacobian (even on the GPU if you want) without leaving Python at all. The API is also NumPy-compatible!
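For illustration, a minimal JAX sketch (the function and evaluation point are made up for the example):

```python
import jax
import jax.numpy as jnp

# A toy vector-valued function f: R^2 -> R^2.
def f(x):
    return jnp.array([x[0] ** 2 + x[1], jnp.sin(x[1])])

# Forward-mode Jacobian; jax.jacrev gives reverse mode instead.
J = jax.jacfwd(f)(jnp.array([1.0, 0.0]))
print(J)  # 2x2 Jacobian of f evaluated at (1, 0)
```

The same code runs on GPU automatically if a GPU-enabled jaxlib is installed; no code changes needed.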


We target very low-level & specific hardware. I don't think it would be easy to deploy JAX on it. But it's an interesting idea, maybe for robots running Linux!


It's fascinating that it can model so much of the subtle dynamics, structure, and appearance of the world in photorealistic detail, and still have a relatively poor model of things like object permanence:

https://cdn.openai.com/sora/videos/puppy-cloning.mp4

Perhaps there are particular aspects of our world that the human mind has evolved to hyperfocus on.

Will we figure out an easy way to make these models match humans in those areas? Let's hope it takes some time.


How much does this particular result change when running in release mode?


Depending on the code, I've seen performance increases above 100x in some cases. While that's not exactly the norm, benchmarking Rust in debug mode is absolutely pointless, even as a rough estimate.


Is there any compiled language that doesn't benefit heavily from release builds? That would be interesting if true.


This can happen in languages that use dynamic constructs that can't be optimized out. For example, there was a PHP-to-native compiler (HipHop/HPHPc) that lost to faster interpreters and JIT.

Apple's Rosetta 2 translates x86-64 to AArch64 code that runs surprisingly fast, despite being mostly a straightforward translation of instructions, rather than something clever like a recompiling optimizing JIT.

And plain old C is relatively fast without optimizations, because it doesn't rely on abstraction layers being optimized out.


Julia, for example, runs by default with -O2 and debug info turned on. It's a good combo of debuggability and performance.


On my machine, running the debug executable on the medium-size dataset takes ~14.5 seconds, and release mode takes ~0.8 seconds.


Do you know why debug mode for Rust is so slow? Is it compiling without any optimization by default? Is it the overflow checks?


The optimisation passes are expensive (though they're not the largest source of compile time).

Debug mode is designed to build as-fast-as-possible while still being correct, so that you can run your binary (with debug symbols) ASAP.

Overflow checks are present even in release mode, and some write-ups seem to indicate they have less overhead than you’d think.

Rust lets you configure your Cargo profiles to apply some optimisation passes even in debug, if you wish. There's also a config to have your dependencies optimised (even in debug) if you want. The Bevy tutorial walks through doing this, as a concrete example.
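As a sketch, those two knobs look roughly like this in Cargo.toml (the opt-level values are illustrative):

```toml
# Lightly optimise your own crate in debug builds:
[profile.dev]
opt-level = 1

# Fully optimise all dependencies, even in debug builds:
[profile.dev.package."*"]
opt-level = 3
```

Dependencies rarely get recompiled, so paying for full optimisation there is usually a one-time cost.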


That's not right: in release mode, Rust only checks for overflow in operations on values known at compile time. In debug mode, all operations are checked for overflow.


Integer overflow checks can be enabled in release mode by adding this to your Cargo.toml:

    [profile.release]
    overflow-checks = true
IMO it should have been the default.


Aahh, my bad. TIL.


Yes, optimization is disabled by default in debug mode, which makes your code more debuggable. Overflow checks are also present in debug mode, but removed in release mode. Bounds checking is present in release mode as well as debug mode, but can sometimes be optimized away.

There's also some debug information that is present in the file in debug mode, which leads to a larger binary size, but shouldn't meaningfully affect performance except in very simple/short programs.
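As a small, self-contained illustration of the overflow difference (the explicit methods used below behave the same in debug and release, which is why they're safe to demo with):

```rust
fn main() {
    let x: u8 = 255;

    // `x + 1` would panic in a debug build ("attempt to add with overflow"),
    // but silently wrap to 0 in a default release build.

    // These explicit methods behave identically in both modes:
    assert_eq!(x.wrapping_add(1), 0);   // always wraps
    assert_eq!(x.checked_add(1), None); // always reports overflow
    println!("ok");
}
```

With `overflow-checks = true` in the release profile, the plain `x + 1` panics in release builds too.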

