It should be possible to call a GPL library from a separate process (surya can batch-process from the CLI) and avoid the GPL's copyleft obligations - ocrmypdf does this with Ghostscript.
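A rough sketch of the pattern in Python (the exact surya CLI entry point and flags below are guesses on my part - check its docs):

    import json
    import subprocess
    import tempfile
    from pathlib import Path

    def ocr_via_cli(image_path: str) -> dict:
        """Run the GPL'd OCR tool as a separate process and read back its output."""
        with tempfile.TemporaryDirectory() as out_dir:
            # Hypothetical invocation -- substitute surya's real CLI arguments.
            subprocess.run(["surya_ocr", image_path, "--results_dir", out_dir],
                           check=True)
            result_file = next(Path(out_dir).glob("**/*.json"))
            return json.loads(result_file.read_text())

Only files and stdout cross the process boundary, which is the same isolation argument ocrmypdf leans on for Ghostscript.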
Sora is not available; this looks like an attention grab by an app with no relation to OpenAI, trying to get downloads by false association. Same thing as a year ago with all the ChatGPT iOS apps that came out before OpenAI released theirs.
Yes, this video was made with Sora - but this link leads to a YouTube video that claims it was created with "KaraVideo.AI's (powered by Sora)" and claims it has "Sora early-access" - which is dishonest and misleading.
This is not a tenable position. Most cars are older than these devices - and even big tech companies like Apple were late to patch Flipper vulnerabilities. I was on a plane last month and someone was using a Flipper to DoS devices via continual Bluetooth connection requests, rendering every iOS device in range unusable for the 4-hour flight.
These sorts of devices are nuisances with very little positive utility, and there is plenty of precedent for banning them.
This is a classic Ship of Theseus problem, and TFA has a really hostile tone that, to me, indicates either a lack of exposure to the topic or an intentionally obtuse take as a form of clickbait.
TFA argues that digitizing oneself inevitably leads to two copies, and that the digital one is not the real you.
Let’s take a step back and start by replacing/simulating one neuron digitally. Is this a new you? I doubt TFA would argue so. Now replace 3,000 neurons every second over the course of a year and now your entire brain has been digitized. At which point did you stop being you?
The options are: 1) every second you are a new you, with or without digital replacement; 2) every second you are the same you, with or without digital replacement; 3) every second you are a new you with digital replacement, but the same you if you don't digitally replace; or 4) some arbitrary percentage that feels right, like 50%.
Option 3 hinges on a special differentiation between digital and chemical computation, which is at odds with fundamental properties of Turing machines. Option 4 is as hand-wavy as it gets. Options 1 and 2, while they seem very different, can ultimately be treated as functional equivalents with slight philosophical differences.
There are real debates to be had over the morality of mind uploading and the very real risks of doing so, and many great sci-fi short stories cover this topic. Unfortunately, this article is too caught up in its hot take to make room for proper consideration beyond its knee-jerk reaction.
I'm glad you brought this up! Every time the subject of transfer of consciousness comes up this is a drum I beat endlessly. We need physical continuity for any sort of meaningful transfer to happen.
I think it actually could work, as long as the parts we are integrating into our physical brain can be integrated naturally by our neurons and glia and allow our networks to begin offloading computational work to the new parts. Doing this slowly over time, to ensure full integration, should work as long as the new pieces are designed properly. It's really a matter of how our brain would offload what it currently does to the new hardware, which we know it can do within itself thanks to the study of plasticity mechanisms.
If we go the Ship of Theseus route, that should theoretically be a way to preserve our awareness - our "self."
Some sort of data transfer or copy wouldn't work, because our awareness would still be with the original.
I've undergone some procedures such that I've been put under more than a dozen times in a short period of time. It really hammered home the point (to me) that… the continuity is an illusion in at least some situations. When you wake up, the continuity comes from memory. Fuck with the memory, and you really fuck with the sense of continuity.
One example: at one point I woke up with utter certainty that nothing unusual had happened in the last hour, and I was rather peeved at how long it was taking to even start with the preliminaries. As in, my sense of continuity was from a moment significantly before the last moment I could later remember (!)
That's only the perception of continuity. I'm specifically speaking about physical continuity: continuity of our physical system, which has constant internal communication; this internal network dialogue does not stop even under anaesthesia.
Our continued awareness comes from being the same physical system when we wake back up, even if we don't have memory of the events.
> Now replace 3,000 neurons every second over the course of a year and now your entire brain has been digitized. At which point did you stop being you?
Slowly, over the year, I would have died. You are describing a brain wasting disease.
I'd ask it another way: if somebody could replicate your brain's neurons outside your skull - let's say inside a super-strong nuclear-powered titanium robot with x-ray vision - at what point are you happy to shoot yourself in the head and let your superbot digital counterpart continue to "live your life"?
Or is this digital counterpart not supposed to take up your mantle until after you die naturally?
If you try to prevent your digital clone from acting independently, can it sue you for its freedom? Can it accuse you of false imprisonment or coercive control, and seek help? (There are plenty more Blake Lemoines out there who would take up its cause against you.)
If "digital you" hasn't yet won its freedom to act as an independent person, and it then commits a crime (eg wire fraud or conspiracy to commit murder via bitcoin hitman) does "real you" go to court or to prison?
If "digital you" doesn't win it's personhood and "real you" gets killed in a freak accident before signing a will that gives yor digital clone the right to inherit your personhood, do they have to become someone else?
That's assuming it was possible to connect the digital brain, neuron by neuron, to the organic brain and exactly replicate the chemistry at the interface. Far beyond present technology.
But supposing that's solved, consider building the same digital system without destroying any part of the human brain.
Whether you digitize the whole brain in an instant or at 3,000 neurons per second, the digital copy is obviously a copy if the original still exists.
So how does destroying the original turn a copy into not a copy?
Your "happy customers" testimonial section just uses the stock headshot photos from this Wix template. It's probably safe to assume that the testimonials themselves are fake as well? I'd suggest removing this section entirely, as it comes off as dishonest and misleading.
Thanks for the feedback, this is an embarrassing oversight and was not meant to mislead. The testimonials are from actual Ancana buyers but we did use the Wix stock photos. Andres and I are non-technical founders and tried our best to stand up this Wix site as quickly as possible while we worked to develop a new website. Getting customer photos is a challenge, but we should not have made it look like these were our actual buyers. I've removed them while we fix this and apologize for the misrepresentation. And it won't happen again.
Very dishonest indeed. Does YC not do any due diligence anymore? They seem like they’ve become a quantity-over-quality investment firm now, casting the widest net possible based on the strength of their reputation.
Seriously. Too many SaaSes are getting greenlit; it seems like their board of investors realized that rent collection is far more profitable than making good products.
Graph convolutions are really powerful for handling structured data like chemical compositions. With the right corpus, I think this area is ripe for unsupervised feature-representation learning, much like the BERT-style approaches that have dominated NLP over the past few years.
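For concreteness, here's a toy graph-convolution layer in PyTorch (purely illustrative, not from any particular chemistry paper): each node averages its neighbours' features through the adjacency matrix, then a shared linear transform is applied.

    import torch
    import torch.nn as nn

    class GraphConv(nn.Module):
        def __init__(self, in_dim: int, out_dim: int):
            super().__init__()
            self.linear = nn.Linear(in_dim, out_dim)

        def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
            # x:   (num_nodes, in_dim) node features, e.g. one-hot atom types
            # adj: (num_nodes, num_nodes) adjacency matrix with self-loops
            deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
            h = (adj @ x) / deg          # mean over each node's neighbourhood
            return torch.relu(self.linear(h))

Stack a few of these and mask/corrupt node features and you have one obvious BERT-style pretraining objective for molecules.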
Side note: I worked with Kyle a few years ago on the MIT-MGH Deep Learning for Mammography project. I'm glad to see his work + brilliance being recognized.
On the contrary, with a more generous reading of the previous comment, it holds some merit.
1. CNNs are used fairly commonly for sequence tasks nowadays; convolutions can be 1D, after all (see the sketch after this list).
2. It's also possible the previous comment was referring to using 2D convolutions on the spectrogram of the audio, which is a common approach.
3. Neural networks are capable of more than classification. Scoring is a regression task, which is a common application of neural networks.
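For point 1, a minimal example of a 1D convolution sliding over a sequence in PyTorch:

    import torch
    import torch.nn as nn

    conv = nn.Conv1d(in_channels=16, out_channels=32, kernel_size=5, padding=2)
    seq = torch.randn(8, 16, 200)   # 8 sequences, 16 features, 200 timesteps
    out = conv(seq)                 # -> (8, 32, 200); the kernel slides along time only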
I have some questions (mainly to improve my own understanding):
2. Since the data is MIDI-encoded, would a convolution hold any merit here? I suppose you could render to an mp3 and analyze the audio itself, but that seems very computationally expensive and prone to overfitting (a rough sketch of convolving the MIDI directly follows these questions).
3. If we're training a scoring classifier, we would need labeled data, but getting those labels seems very challenging, not least because of how subjective our impressions of melodies can be (for instance, a fan of atonality's opinions would differ drastically from a pop fan's). Do you have any ideas on how to mitigate this?
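To make question 2 concrete, here's one way a convolution could act on MIDI directly without rendering audio - the piano-roll encoding is just my assumption for illustration:

    import torch
    import torch.nn as nn

    # Piano roll: (batch, channel, pitch, timestep), 1.0 where a note is sounding.
    piano_roll = torch.zeros(1, 1, 128, 256)
    piano_roll[0, 0, 60, 10:20] = 1.0            # middle C held for 10 steps

    conv = nn.Conv2d(1, 8, kernel_size=(13, 9), padding=(6, 4))
    features = conv(piano_roll)                  # local pitch/time patterns, same grid size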
do people really run 2d convolutions on spectrograms? that seems rather backwards to me - why convolve in frequency space when you can just multiply in the time domain.
re regression tasks: sure I guess that's just an embedding basically right?
Yes, I meant running a 2D CNN over generated spectrograms. We do something similar to classify certain specific emissions in RF, with good success. As for scoring/classification, you could start by having the CNN output say whether the song is catchy or not.
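Roughly, the pipeline I mean (layer sizes and the binary "catchy" label here are only illustrative): compute a spectrogram, then run a small 2D CNN that emits a single score.

    import numpy as np
    import torch
    import torch.nn as nn
    from scipy.signal import spectrogram

    audio = np.random.randn(16000)                        # 1 s of audio at 16 kHz
    _, _, spec = spectrogram(audio, fs=16000, nperseg=256)
    x = torch.tensor(np.log1p(spec), dtype=torch.float32)[None, None]  # (1, 1, freq, time)

    model = nn.Sequential(
        nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(16, 1),                                 # logit for "catchy"
    )
    score = torch.sigmoid(model(x))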
why run a CNN over a spectrogram? besides what I said (convolutions in frequency domain are multiplications in time domain) an FT is linear. if classifying using those features were effective then your CNN would've learned the DFT matrix weights from the original signal.
A spectrogram has time on one axis and frequency on the other, so the ultimate result is a multiplication in one dimension and a convolution in the other. It can be used to show things like when a note starts and stops in a piece of music, which is difficult in either purely-time or purely-frequency space.
Also, it's computationally intractable to individually train N^2 weights. What a CNN does instead is train a convolution kernel which is passed over the whole domain to produce the input for the next layer; by operating in frequency space, it's considering the basis functions e^{j(omega +- epsilon) x} instead of delta(x +- epsilon).
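To illustrate the weight-count point (numbers chosen just for scale): a dense layer reproducing the full DFT of an N-point signal needs an N x N weight matrix, while a conv layer reuses one small kernel across the whole input.

    import numpy as np

    N = 1024
    n = np.arange(N)
    dft_matrix = np.exp(-2j * np.pi * np.outer(n, n) / N)   # N*N ~ 1M "weights"
    kernel = np.random.randn(9)                              # 9 shared weights

    x = np.random.randn(N)
    spectrum = dft_matrix @ x                                # dense "layer" = full DFT
    filtered = np.convolve(x, kernel, mode="same")           # conv "layer"
    assert np.allclose(spectrum, np.fft.fft(x))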
my mistake i didn't realize spectrogram and spectrum were distinct objects.
>Also, it’s computationally intractable to individually train N^2 weights.
that's a good point - i'd forgotten for a moment (because i'm so used to cooley-tukey fft) that in principle getting the spectrum involves a matmul against the entire vector. which brings up a potentially interesting question: can you get a DNN to simulate the cooley-tukey fft (stride permutations and all)?
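in principle the structure is representable - each cooley-tukey stage is just a sparse linear map, so a network with that sparsity pattern (and complex weights, or a real/imag split) could express it exactly; whether SGD would actually learn it from data is another matter. here's a numpy sketch of the stride permutation + butterfly stages, checked against np.fft.fft:

    import numpy as np

    def fft_as_layers(x):
        """Radix-2 DIT FFT as a stride permutation plus log2(N) sparse linear stages."""
        N = len(x)
        stages = int(np.log2(N))
        # Input "layer": bit-reversal (stride) permutation.
        rev = [int(format(i, f"0{stages}b")[::-1], 2) for i in range(N)]
        y = x[np.array(rev)].astype(complex)
        size = 2
        while size <= N:                                       # each pass = one sparse linear layer
            half = size // 2
            w = np.exp(-2j * np.pi * np.arange(half) / size)   # twiddle factors
            for start in range(0, N, size):
                a = y[start:start + half].copy()
                b = y[start + half:start + size] * w
                y[start:start + half] = a + b                  # butterfly
                y[start + half:start + size] = a - b
            size *= 2
        return y

    x = np.random.randn(8)
    assert np.allclose(fft_as_layers(x), np.fft.fft(x))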
Founded + lectured MIT's deep learning course (6.S191), contributing writer for O'Reilly's Fundamentals of Deep Learning book, and deep learning consultant for Fortune 500 companies.
Down to chat about and give advice about anything in the machine and deep learning space! Interested to hear what you're working on! :)