I worked for Livescribe from 2008 to 2010. The author of this article describes the Nuwa Pen as a "game-changer" but, as others have pointed out in this thread, other products have done the same thing for a long time. Livescribe's pens captured both handwriting and audio, which was great for meetings and lectures. For a while they also supported an app ecosystem, though the apps never proved to be more than a gimmick.
I still believe there is a niche where a product like this would be very much at home, but Livescribe's smartpens in particular were undone by a combination of bad internal decisions and a market that shifted out from underneath them. Who knows, maybe the Nuwa Pen will be able to target that niche more successfully. I could certainly find a use for one, given the right combination of price and features.
Does anyone know what this actually does? There's no explanation in the documentation, and the only screenshot is of the build process.
I get the impression that whatever it is requires metadata from a digital camera, which isn't present in my photos because I like to shoot film and scan it. My first thought was, "Wow, how is it going to analyse the subject matter and classify all my scans in 'seconds'?" My second thought was, "Ah, it won't." The author is clear that this tool is based solely on their own workflows, which is fair enough, but I'd at least like to know what those workflows are.
It moves photos between local files, Dropbox, and Google Drive, depending on how you configure it.
From a quick look at the code, so far you can achieve the same with exiftool if you mount those sources as local drives. There isn't much else yet, but it wouldn't take a lot of work to hook this up with a vision model to add some labels or metadata.
For example, I normally run the following to import files from an SDCard to a local folder and organize them by date:
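(Roughly speaking; the paths and date format below are placeholders, so adjust them for your own card and folder layout.)

    exiftool -o . -r "-Directory<DateTimeOriginal" -d ~/Photos/%Y/%Y-%m-%d /media/sdcard/DCIM

The -o . tells exiftool to copy rather than move, and writing Directory from DateTimeOriginal files each image into a folder named after its capture date.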
A bit off-topic: I have the same problem you face, shooting film with a Leica and manual lenses.
It would be interesting to see if there are tools to automatically populate metadata based on the content:
1) field of view + a rough estimate of the lens parameters can be computed directly from perspective cues
2) this can be narrowed down to specific lenses either:
2a) using a short list (all the lenses a specific photographer owns), or
2b) using enough lens data + images (e.g. something that DxO or Adobe could do)
While I am skeptical about doing (2b), it should be possible to do (2a). A middle ground is to manually label a few images and use semi-supervised learning to propagate the labels to the rest of one's photo collection.
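On (1), a minimal sketch of the geometry (my own illustration, not an existing tool): the vanishing points of two orthogonal scene directions, say the vertical and horizontal edges of a building, pin down the focal length in pixels, and from that you get the field of view plus a focal-length guess for an assumed sensor size.

    import math

    def focal_px_from_vanishing_points(v1, v2, c):
        # For square pixels and principal point c (roughly the image centre),
        # vanishing points v1 and v2 of two orthogonal scene directions
        # satisfy (v1 - c) . (v2 - c) + f^2 = 0.
        dot = (v1[0] - c[0]) * (v2[0] - c[0]) + (v1[1] - c[1]) * (v2[1] - c[1])
        if dot >= 0:
            raise ValueError("vanishing points not consistent with orthogonal directions")
        return math.sqrt(-dot)

    def fov_and_focal_mm(f_px, image_width_px, sensor_width_mm=36.0):
        # Horizontal field of view in degrees, plus the focal length in mm
        # assuming a 36 mm wide frame (full frame / 135 film).
        fov_deg = 2 * math.degrees(math.atan(image_width_px / (2 * f_px)))
        return fov_deg, f_px * sensor_width_mm / image_width_px

Matching that estimate against a short list of lenses you actually own (2a) is then just a nearest-value lookup; (2b) would need much richer data, like distortion and vignetting profiles.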
> From a quick look at the code, so far you can achieve the same with exiftool if you mount those sources as local drives. There isn't much else yet, but it wouldn't take a lot of work to hook this up with a vision model to add some labels or metadata.
Thanks for walking me through it, and for the handy example with exiftool. I like that you're doing this with command-line tools because you can see what's going on more easily. I think it's more accessible than requiring a Java + Gradle build environment.
> 1) field of view + a rough estimate of the lens parameters can be computed directly from perspective cues
> 2) this can be narrowed down to specific lenses
Really! That is so cool, I wasn't aware that kind of thing was possible. I never write down the lenses I've used (because lazy) so extracting that kind of metadata from the image would be an interesting exercise.
Your friends are just poking fun; of course it's a real library. I also have a large number of ebooks (in addition to a 1-2k book physical library) and the ebooks are, paradoxically, less accessible. It's too easy to download hundreds of books in one go (say, all the original Goosebumps books) and never actually look at them, whereas every physical book has to be obtained and shelved individually. Ebooks then disappear into Calibre, where I utterly forget about them, whereas a physical book's presence on the shelf is a constant reminder that it exists and is waiting for me to read it.
Yeah I came here thinking it was about books. Apparently Magic people call their cards a library? Here I am sat with a plain old deck of cards like a complete lemon.
Deck is the set of cards that you bring to a match.
Library is a zone of the game state: the cards that are yet to be drawn. It only exists as part of the gameplay. Other zones are the battlefield, the graveyard...
Strictly speaking, it doesn't have to be undrawn cards, given the myriad ways you can put cards from other zones into your library; those cards may or may not have been drawn from the library before.
The library is your collection of cards from which you can draw, and from which you do draw every turn.
It's not your entire compendium in and out of game; it's simply the deck of cards from which you (usually) draw, and it's distinct from your hand, graveyard, and exiled cards.
I came here to say exactly this! I got thrown by the second image right at the top, before the article had even started: "K(AW)MPL(E)KS". All well and good if you want to write in an American accent.
On a related note, writing out accents phonetically is usually a bad idea. I remember reading Asimov's "Foundation" when I was a teenager in Australia. One of the characters is a lord and speaks like this: "Ah, Hahdin. You ah looking foah us, no doubt?" Because of course everyone in the future is American, and everyone who's not American speaks with a comical (and completely unreadable) accent. Trying to read that dialogue makes me feel like I'm having a stroke.
I'm doing a master's in library science and archives, currently working a couple of internships processing archives. The answer is that archives are big, complex, and time-consuming. One collection I work with is 131 cubic feet of records, including papers, floppy disks, and photographic film. It's unprocessed, meaning the archivists haven't had a chance to arrange and describe it, which is no wonder considering the size of the collection — and that's only one of many in the backlog.
Even if a collection is processed, the sheer volume of information means archivists typically don't describe every document. In a library you can catalogue every book, but that's not possible in an archive. And in an artist's papers, how can you know which document will be important to someone? How can you know what's artistically significant? The time it would take to research the background of every document (Was this script ever made? Is it interesting to anyone?) would be prohibitive.
Add to the mix that archives are chronically underfunded and archivists underpaid. This is coming from the unpaid intern who was asked to process a $33,000 acquisition last year. Fun times.
For comparison a regular French-door fridge is about 25 cu ft. So 131 cu ft is equivalent to about 5 fridges’ worth of materials. Not that one would store an archive inside fridges :)
I have one of these too! Although you'll probably never hear it played back on the correct equipment, you can fudge it at home with a regular turntable and a bit of wizardry on the computer. You need to digitize your turntable's output in stereo, invert one track, mix it down to mono, and speed it up to 80 rpm. The results aren't ideal but unless you have a time machine or a music historian on hand it may be the best you can do.
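If you want to script that chain, here's a rough sketch (assuming the capture was made with the platter at 45 rpm and that you have the soundfile and scipy Python packages; change the 45/80 ratio to match your actual capture speed):

    import soundfile as sf
    from scipy.signal import resample_poly

    # Stereo capture of the disc, made at 45 rpm in this example.
    stereo, rate = sf.read("capture_45rpm.wav")

    # Invert one channel and mix down to mono: the left-minus-right difference
    # cancels the in-phase (lateral) signal and keeps the out-of-phase
    # (vertical) component. Halved to avoid clipping.
    mono = 0.5 * (stereo[:, 0] - stereo[:, 1])

    # Speed it up from 45 rpm to 80 rpm: shrink the signal to 45/80 of its
    # length so that, played at the original sample rate, it runs 80/45
    # times faster (pitch and tempo rise together, as a speed change should).
    sped_up = resample_poly(mono, 45, 80)

    sf.write("restored_80rpm.wav", sped_up, rate)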
A friend in high school used to do the old trick of copying a DLL onto a floppy disk, renaming it "essay.doc", and handing it in to the teacher. The next week he'd feign concern that the disk must have got corrupted somehow, but by then he'd bought himself an extra week to write the paper.
I had a QuickBASIC program that'd spit out a generic disk error screen, and later a Visual Basic program that'd put up a Windows dialog saying it needed to be run on NT 4 or later (the school's systems were Win9x at the time). Good to hear we all slacked off the same :P