DIY Camera Using Raspberry Pi (ruha.camera)
222 points by the_arun on April 19, 2021 | 122 comments



As a prof who sees students waste time scanning their homework, one gadget I want to build is an easy-to-use scanner, with an RPi Zero and a cheap camera.

The idea is to have the camera with a wide-angle lens attached to the top of a stack of papers/notebook. Pressing a button takes a picture, then does an affine transform (?) on it to get the perspective right, then uploads it to the LMS.

Just a clean, hassle-free experience. No orienting your phone awkwardly, no getting the lighting right, no mini edits on your phone, no compiling PDFs.

Also, very useful for me, as I like to do math by hand.

Edit: Image https://imgur.com/Fb45H7M.png The fact that the camera is fixed relative to the paper means the transform is also fixed. A multi-colored LED (and knowledge of the paper color) can also fix the lighting.


I believe it would be a homography transform, which is less constrained than an affine transform, as it does not preserve parallel lines/angles. Instead it maps lines to lines as a way to adjust perspective.

https://en.wikipedia.org/wiki/Homography
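Since the camera is fixed relative to the paper, the homography only needs to be solved for once. A minimal numpy sketch of that solve from four corner correspondences (in practice OpenCV's getPerspectiveTransform/warpPerspective do this for you; the corner coordinates below are made-up numbers):

```python
import numpy as np

def homography(src, dst):
    """Solve for the 3x3 perspective transform mapping src -> dst.

    src, dst: four (x, y) corner points each. Same math as OpenCV's
    getPerspectiveTransform, with the bottom-right entry fixed to 1.
    """
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear equations in the
        # eight unknown matrix entries.
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        b += [u, v]
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def warp_point(H, pt):
    """Apply the homography to one point (note the homogeneous divide)."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return x / w, y / w

# Corners of the skewed paper as seen by the camera (hypothetical),
# mapped to an upright 850x1100 scan.
seen = [(120, 80), (980, 140), (1040, 900), (60, 820)]
flat = [(0, 0), (850, 0), (850, 1100), (0, 1100)]
H = homography(seen, flat)
print(warp_point(H, seen[0]))   # maps to (0, 0)
```

By construction the solved matrix maps each detected corner exactly onto its target corner; warping the whole image with this matrix is then one resampling pass.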


Can you just tilt the camera sensor to get a transformed image? I remember reading that box cameras allowed you to tilt the film image plane to match a skewed object plane in closeups.


You probably could, but it seems a bit excessive to use hardware to solve this problem.


Well it removes the problem entirely from the software domain. If it’s a single purpose imaging device with a fixed geometry like OP suggested, it’s the natural solution.


To calculate the angle of the camera you'd still have to find the corners of the paper using some simple computer vision.

You could also just use the corners to compute the homography transform.

Really depends on the constraints too. If you're going for archival-quality scans then maybe it would be better to do it with hardware. At that point I'd just get a way higher-res camera than needed, then transform and downsample as needed. Would still probably be cheaper than a gimbal.


I'm currently an undergraduate math student, and had similar frustration, but I reached a different solution. I built a cheap chalkboard in my room (hardboard panel that I painted with chalkboard paint – $15 total since I used a trash-picked board). For scanning, I mounted a Raspberry Pi with a camera, and did some basic OpenCV image processing to extract the image. All told, it took a weekend to build and set up, and I have used it every day since.

This same setup could easily be mounted at a different angle for scanning papers laid flat on a desk, and would work with nearly no modification.

The image processing pipeline for me was:

    - Gaussian blur
    - Brightness/color thresholding
    - Finding and simplifying the contours
    - Computing the convex hull for each contour (in case the board is partially occluded)
    - Using a heuristic to pick the contour most likely to be the chalkboard
    - Extracting corners from the contour and doing a "getPerspectiveTransform" followed by "warpPerspective"
    - Sharpen the image (subtract Gaussian blurred version)
    - Return the image via Flask server so I can pull scans onto my phone or computer over LAN
The same pipeline is reused for a live feed (MJPEG) that I can split-screen with my webcam (via OBS) for collaboratively working on homework problems via video call. The live stream is accessed via a different Flask route that calls many of the same helper functions.
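For illustration, the sharpen step above (subtract a Gaussian-blurred copy) can be sketched in plain numpy; this is a guess at the approach, not the commenter's actual code, which presumably uses the equivalent one-call OpenCV functions (cv2.GaussianBlur plus cv2.addWeighted):

```python
import numpy as np

def gaussian_kernel1d(sigma, radius):
    # Discrete Gaussian, normalized to sum to 1.
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-(x ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def blur(img, sigma=2.0):
    # Separable Gaussian blur: filter each row, then each column,
    # with edge padding so the output size matches the input.
    k = gaussian_kernel1d(sigma, radius=int(3 * sigma))
    pad = len(k) // 2
    out = np.apply_along_axis(
        lambda r: np.convolve(np.pad(r, pad, mode='edge'), k, 'valid'), 1, img)
    out = np.apply_along_axis(
        lambda c: np.convolve(np.pad(c, pad, mode='edge'), k, 'valid'), 0, out)
    return out

def unsharp(img, amount=1.0, sigma=2.0):
    # "Sharpen (subtract Gaussian blurred version)": boost the detail
    # layer img - blur(img), then clip back to the valid pixel range.
    sharp = img + amount * (img - blur(img, sigma))
    return np.clip(sharp, 0, 255)
```

On flat regions the detail layer is zero, so the image is unchanged; across chalk strokes the local contrast is boosted.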

The code is not yet publicly released, but was very straightforward to write; each of the steps mentioned above is pretty much one OpenCV function call. Here is a sample scan and a demo video:

https://www.dropbox.com/s/h21ke7cnlwjs9wi/2021-03-24_01-37-1...

https://youtu.be/l66G8d-CrnU

My email can be found on my website (linked via my HN profile) if you want to discuss!


Thanks. This is very nice. I will do some experiments with my phone camera at the angle I indicated, and might ask for your help with OpenCV (it's been 10 years since I used it in undergrad).


wonderful


Phone scanners are already there.

Auto scanning with border and angle detection works decently well [0], mini tripods with phone brackets abound. It’s very easy to set a phone above the table at a slight angle (that makes it easier to move the docs in and out and get rid of shadows) with the app auto scanning every sheet that’s laid below.

[0] I used Scanner Pro by Readle, on iOS for reference


Just in case someone didn't know this and finds it useful: the system Files app on iOS can also scan documents to PDF, including perspective correction. I believe the Notes app can do that too.


The notes app can definitely do this and works quite well. It’s never worth the trouble of taking out my scanner anymore. If you have decent lighting and a contrasting background for the document it almost never requires refining for me.


I use this as well and it generally works great, but I did recently have the county recorder's office reject some paperwork for my deed over the image quality. Had to go to Kinko's and find a real scanner.


Notes app also does OCR and indexes the scanned content.


I didn't know this. Do you know how long that feature has been in place? I was uploading my screenshots to a completely different app for OCR until recently.

I've changed my whole process now so it's not nearly as critical a step as it once was. But I still use it in passing.


They work ok, but are fiddly. I think OP is looking for something more bulletproof with a better workflow.

Come to think of it a frame with a black base and an overhead light incorporating a phone holder would likely do the trick well.


It will technically work. But do you think a student who still wants to appear "cool" is going to whip out the contraption you mention in front of their peers? There is more to solutions to problems than the technological possibilities.


Sure, but that applies at least as much to the original solution.

And this seems more of a 'on my desk at home' sort of problem space than a 'whip it out of my bag' sort.


Having built a lot of RPi projects, "bulletproof" is the last word that comes to mind, though. I mean, the software end is usually fine if the SD card doesn't get corrupted, but building physically robust devices is a serious challenge.


Agreed; to me the problem isn't the phone and imaging part (which replacing with an RPi will probably just make worse) but the lighting and geometry. On top of that, a purpose-built workflow might be nice compared to something like Scanner Pro: tailored to emailing assignments to your profs or whatever.


Funny how an Apple consumer is advocating for a "fiddly" solution. I only use linux, but I was actually inspired by the Apple philosophy in this. A one-button, no-nonsense solution.


I have been in and out of the Apple ecosystem, now with a foot in both sides. In this specific case, I still prefer the iOS app over the alternatives I've been trying on android for instance.

I think there is something to say about the sheer money size of the iOS ecosystem that allowed talented devs to get a decent ROI, even on niche apps.

I am happy with an Android phone, but only because I also keep other iOS devices around for when I really miss some apps. And it seems a lot of people are in the same boat, looking at iOS apps as unitaskers that they'll chain and shoehorn into workflows that fit their needs, more than as "just works" magic tools.


Microsoft Lens is the best app I've encountered for scanning documents/images, in combination with Tasker, which checks the Office Lens folder and automatically uploads it (Android).

https://old.reddit.com/r/tasker/comments/aqyzbu/help_task_th...

You could use foldersync for that too.

Actions:

- start app

- take picture for every page and confirm

- save to gallery in the end. It will store to Pictures/Office lens

The task will automatically upload the files.


I do scanning to PDF just with Google Drive on my phone. You can also buy (or DIY) a scanning box like this guy: https://smile.amazon.com/Scanner-Bin-Document-Scanning-Solut...

I've seen models that also have LED strips for even lighting. What I haven't seen is a gadget that can feed multiple sheets easily.


Notes that ships with iOS actually has a nice document scanner built in that uses the phone camera ... I mean it does the transform you describe to get better-than-normal document photos.

Too bad it then simply adds as an attachment to the note and doesn't allow you to go straight to Share.


There are other apps that do similar things. When I was in college a decade ago I was using "CamScanner", which seems to still be around (though I feel like I read recently that it got bought and the new owners started doing something sketchy)

These days I just use the regular camera app. Documents are plenty readable as-is, and you can straighten/adjust 3D perspective/raise contrast using the regular photo editing features (at least in iOS)


Just tested this. It takes four clicks to get to the share screen after a capture. It does not require switching apps, so it is fairly quick:

1. Click Save in bottom right (segues to note)

2. Click scan in note (segues to edit context)

3. Click share in top right (opens share menu)

4. Click method of share for PDF (SMS, email, copy)

But wait! You can use press and hold (3D Touch / Force Touch?) to skip no steps!

1. Click Save in bottom right (segues to note)

2. Click and hold the scan in note (3D Touch pop-up)

3. Click share in the pop-up (opens share menu)

4. Click method of share for PDF (SMS, email, copy)


Google Docs app has this built in as well, at least on Android. If you hit + to add a new doc one of the options is Scan.



Yes, phone apps work, but they still require several clicks to get things right, and upload. But if your camera is fixed relative to the paper, then there is a single known transform that will correct perspective. If you write software to integrate directly with the LMS, then it will only require one hardware button press per page, to upload the assignment.


Really the transform isn't as much trouble as getting a uniformly lit area with good contrast.


Yes, that could be problematic, hence I incorporated the LED in my proposal image.

As we are working with standard paper, we know its color (as it appears to the camera in neutral light). We could have a calibration step, where the multicolored LEDs are tuned to get the right contrast and color balance. Though, I have no idea how well it will work in practice.
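As a rough sketch of what that calibration step might look like, assuming the correction happens in software on a crop of known-blank paper rather than by physically retuning the LEDs (the numbers and function names here are purely hypothetical):

```python
import numpy as np

def white_balance_gains(patch, target=230.0):
    """Per-channel gains that map the measured paper white to a neutral
    target level. `patch` is an HxWx3 crop known to be blank paper."""
    measured = patch.reshape(-1, 3).mean(axis=0)
    return target / measured

def apply_gains(img, gains):
    # Scale each channel and clip back to the valid 8-bit range.
    return np.clip(img * gains, 0, 255)

# Hypothetical measurement: the paper looks warm under the LEDs.
patch = np.full((10, 10, 3), [235.0, 225.0, 190.0])
gains = white_balance_gains(patch)
balanced = apply_gains(patch, gains)   # now ~[230, 230, 230]
```

The same gains could instead be fed back into the relative brightness of the colored LEDs; either way the paper-is-white assumption anchors the calibration.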


The light would be stronger closer to the source. Better to have a stand like the CZUR scanners, with the Pi on top.


Seems like to get it to work at such a sharp angle, you would need a very large depth of field with your lens in order to keep both the nearest and farthest portions of the page in focus.


I do have a PhD in theoretical quantum optics, so no, I have no idea if such a weird angle will actually work in practice given the physical and budgetary constraints. :-D

Seriously, though, I have an idea. We can take two pictures, focusing once on the near part, and once on the far part.

If cameras are cheap (I haven't researched), maybe two on either end of the device would mitigate the complexity of the lenses required.


Depth of field isn't really an issue with wide angle lenses for smaller format sensors. The EFL is shorter which boosts the depth of field. The bigger problem is field curvature if the page is closer than ~6" - most of these lenses weren't designed for short object distances.

For example, this 1.8mm F/2.8 low distortion lens has a hyperfocal distance of 0.2m (~8") on the Pi HQ: https://commonlands.com/products/low-distortion-m12-lens and easily covers 6"-25" depth of field.
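That hyperfocal figure can be sanity-checked with the standard formula H ≈ f²/(N·c) + f. The circle-of-confusion value below is an assumption (sensor diagonal / 1500), not a published spec:

```python
def hyperfocal_m(focal_mm, f_number, coc_mm):
    """Hyperfocal distance in meters: focus here and everything from
    roughly H/2 to infinity is acceptably sharp."""
    return (focal_mm ** 2 / (f_number * coc_mm) + focal_mm) / 1000.0

# 1.8mm f/2.8 lens on the Pi HQ (IMX477, ~7.9mm sensor diagonal);
# a common circle-of-confusion choice is diagonal / 1500.
H = hyperfocal_m(1.8, 2.8, 7.9 / 1500)
print(round(H, 2))  # ~0.22 m, in line with the quoted ~0.2m figure
```

The short focal length dominates here: squaring 1.8mm keeps the numerator tiny, which is why small-sensor wide lenses get such generous depth of field.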


A module on taking pictures with the phone would have fewer dependencies. Most of the problem is simply not understanding the problem space. Humans have had less than two hundred years of dealing with photography, only a decade or so since everyone had a camera in their pocket all the time, and about a year of scanning photographs for academia.


Clearscanner on the phone does this -- my kids use it for their homework. Snaps pictures, finds edges, does affine transform, corrects exposure, and assembles into PDF.


I went into the app store to see about this app, having not heard about it yet. The app's privacy policy says it uses Firebase (Google) Analytics, which transmits information about your use of the app off the device without consent.

This, to me, means I can never trust it for scanning sensitive or private documents, because who knows what it's sending. I declined to install it. YMMV.


You might like OpenNoteScanner https://github.com/allgood/OpenNoteScanner


This is in the context of scanning homework assignments.

Not to mention-- Firebase tracks what, events? You think somehow it's encoding the documents in an event stream to exfiltrate them?


An app that shares my event data silently and without consent cannot be trusted with even low-grade private information. Why would I ever allow it on a device that handles high-grade private information?


> event data silently and without consent

It's neither silent nor without consent since you were told about it.


> An app that shares my event data silently and without consent

You're complaining about a -disclosed- use of analytics.

It seems like you'd be happier with an app that just didn't mention the use.


It's not without consent, nor is it silent. You consent when you decide to install it, and it's not silent if they just told you that it happens.


Can you not just block its internet connection?


Not conveniently on iOS.


Ah, that's unfortunate.


Towards the end of my math degree, I started doing written homework assignments in Inkscape using the pen tool and a Wacom tablet. It worked surprisingly well.


I agree... the problem is that scanning can be such a pain!!

I've got a growing backlog of documents to scan. So... I've decided to make it a project and am building my own scanner control app [i] (using python) to help get them scanned (via ADF-doc feeder) and into my paperless-ng repository more easily (e.g. use QR codes to separate documents automatically and OCR to help pre-tag them before uploading).

[i] https://i.imgur.com/sG1Bldx.jpg


I have thousands of photos of homework from when I was in school stored in Google photos. I always took pictures as a backup but the stuff I submitted I always scanned using a real scanner because it was hard to get a perfectly parallel photo with good lighting.


I use a ReMarkable for essentially this. It works really well for me. I'm not a teacher or a student, I just like keeping my notes by hand. Also any time I need to fill out and sign a doc I put the PDF on the ReMarkable, mark it up, and send it back.


Do you not accept phone pictures of homework? That's what I used to submit and the clarity was enough for TAs to be able to readily spot mistakes.


Of course I do. But students are probably taking like 50 pictures a week for their various courses. I don't think they like the tediousness of it.

Plus with this, I could look at student solutions much easier during class or office hours.

Similarly, I don't want to whip out my phone to take 30 pictures of my research notes one by one.


Might be useful in a place where no one has smartphones but otherwise why not use the smartphone camera that people already have?


Dropbox app does it free.


we call it a bookmark 2.0. ;)


I really really really wish RPi would come out with a real HQ camera with at least an APS-C if not 36x24mm sensor. This dinky little IMX477 sensor is MQ, not HQ.

The uses would be endless, ranging from astrophotography to real professional-quality shots with homemade, scriptable cameras; shots synced with high-speed motion; a whole new era of homemade SLRs and mirrorless cameras that are actually competent; lots of things I can think of.


APS-C sensors are still significantly more expensive than common sensors like the IMX477. Demand would be much lower due to the high price for the sensor and additional cost for lenses and mounting hardware.

RPi foundation isn't really targeted at the niche, high-end, low-volume markets that demand the best of the best. They're great at delivering an excellent compromise of cost and quality. The IMX477 is just right for most use cases.

The sensor alone doesn't deliver 100% of the result of a modern SLR. The image processing code does a lot of heavy lifting that is mostly proprietary. The open source options are improving, but connecting an APS-C sensor to an R-Pi wouldn't automatically give you a DIY SLR competitor.

You'd spend a lot more in the process and get worse results than just buying a commercial SLR camera and running open-source scriptable firmware like Magic Lantern: https://magiclantern.fm/


APS-C sensors are maybe only a few hundred bucks.

I would love to have a nice APS-C or full frame board-level camera that can be easily integrated into something like a Raspberry Pi. At present it is nearly impossible to get something like that --- dev kits for large sensors from Sony Semicon, Canon, and ams are profoundly expensive, only sold to companies, and go through a lengthy quoting process.

> The sensor alone doesn't deliver 100% of the result of a modern SLR. The image processing code does a lot of heavy lifting that is mostly proprietary.

This is HN, hackers like us love to be able to tinker with the heavy lifting. That's the whole point of the Raspberry Pi cameras. Even with the current HQ camera, a used point-and-shoot camera with a similarly-sized sensor (or even bigger sensor) would tend to produce better images and videos while being more compact, robust, convenient, and cheaper than the DIY camera.


APS-C might be a tough ask, but something like this[1] or this[2] might work - smaller sensor, but at least you won't get a ridiculous crop factor on APS-C lenses. The second one says its format is APS-like.

But I’d say an RPi alone would be a bit tough to handle something like this. You’d need at least an FPGA to control the sensor and maybe something like RPi for UI and further processing.

But it’ll be expensive. These components are anything between $500 and $3000, and that’s _without_ the FPGA and other things that would be required. An equivalent crop frame camera might be only $1000(?). If the aim is to have a DIY hackable camera, sure. If you want a cheap camera, no...

[1] https://media.digikey.com/pdf/Data%20Sheets/Canon%20USA%20PD... [2] https://ams.com/documents/20143/36005/CMV12000_DS000603_3-00...


> At present it is nearly impossible to get something like that

So in the astrophotography world they exist. This for example:

https://astronomy-imaging-camera.com/product/asi6200mc-pro-c...

uses an IMX455 full frame sensor, gives you a reasonably hackable full frame camera (not that well supported, but there are grassroots libraries around) but it's $4000 because it's also a cooled camera.

If they had a version of it that's not cooled and just for normal photography for $1000 that'd be awesome.


> APS-C sensors are maybe only a few hundred bucks.

Where are you sourcing APS-C sized MIPI CSI sensors for a few hundred bucks in low quantities?

Larger sensors tend to use SLVS-EC interfaces rather than MIPI CSI used in the Raspberry Pi. The Raspberry Pi doesn't even support full 4-lane MIPI-CSI (except on the CM4). It's limited to 2-lane MIPI-CSI.


> Demand would be much lower

Lenses for full frame cameras are super cheap -- you can find tons of old Russian, Japanese, and East German lenses that will work really well. Many of those lenses are built like tanks and can be had for <$100, some <$50. Most of them produce very nice images and aesthetically have much better look than what I see out of these CCTV lenses for the Pi HQ camera. CCTV lenses were never designed for art, and among other things produce horrible out-of-focus highlights.

> The image processing code

Well yes, that's also the point, by having an open source APS-C or full frame camera you can tinker to your heart's content with changing the image processing code.

I use Magic Lantern extensively and there's only so much you can do with it, and it's a pain in the ass to recompile code for it. Having a full-fledged Linux system with gcc, opencv, python, and pytorch at my disposal on camera, and with Wi-Fi, Bluetooth, USB, and running an SSH server, and the ability to connect arbitrary I2C and SPI sensors, would be freaking amazing, to say the least.

Wildlife camera with thermal camera trigger and a neural net that recognizes mountain lions? You got it.

LIDAR-based insanely accurate servo-driven autofocus? You got it.

Microphone array that figures out who in the picture is talking and refocuses the camera to that person? You got it.

Home-made Alt-Az tracker with built-in autoguider and remote Wi-Fi progress monitoring? You got it.

And if it can be made to work with the Pi, someone will hopefully also make it work with a Jetson Nano or Xavier NX and then voila I could do some neural net processing in real-time on-board. I've been able to blow Canon's in-camera denoising out of the water with state-of-the-art neural nets by postprocessing RAW images, and if I had a Xavier or Nano on-board I could easily put those neural nets in-camera for convenience.

The possibilities are endless, which is why I really want this hardware so much.


Everything you described can be built out with this existing sensor and hardware.

You don't need an APS-C or larger sensor to get decent images. Most APS-C sensors use a different high-speed interface that won't work with the Raspberry Pi anyway.

Really, this solution from the Raspberry Pi foundation is a great start for any of the projects you mentioned. It's also cheap and highly available.


> You don't need an APS-C or larger sensor to get decent images.

I don't have the space and time to debate the merits here but there is a reason they exist, there are lots of things you can get by having a large sensor (including a different aesthetic and better SNR for low light images) and I want those things with a hackable interface and programmatic control of whatever the sensor is capable of.

I've been doing photography with full frame sensors for a decade after upgrading from APS-C, and telling me "you don't need an APS-C camera" without understanding why I use a full frame camera or the work I produce with them isn't really helpful.


The thing is, if you want to use existing older camera lenses a large sensor is a hard requirement unless you want an ultra-zoom. Which is why people are having success with the rpi sensor for astrophotography, but this article is about making a portable, everyday camera; I'd quite like to be able to use my old Nikon 20mm as a 20mm, not a... um, 112mm or thereabouts.


Also fix the interface so that you can actually record 4k video. The sensor is capable but the interface is not.

https://www.raspberrypi.org/forums/viewtopic.php?t=281095


The RPi HQ camera is almost usable for astrophotography; here are a few shots I did with it last year: https://terramex.neocities.org/astro/

Of course it makes little sense from a financial point of view, as a good star tracker and optics are magnitudes more expensive, but it was fun.

I agree with rest of your post. Even MFT-size sensor would be great as there are plenty great lenses with that image circle size.


I'm just comparing your Bode's Galaxy (f/3.3, 200mm, 120x21s) with my recent attempt (f/6.3, 600mm, 42x30s), and besides magnification, the result seems fairly similar - if anything, I think the dark sections of your image are less noisy. Your arrangement collected about 7.2 times as much light (ignoring magnification), so that's not surprising. I was using a Nikon D7500, so if the RPI HQ camera is comparing reasonably favourably with that, then yes that's not bad.
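That "about 7.2 times" figure follows from total integration time divided by the square of the f-number (light per unit of sensor area, ignoring magnification as noted). A quick check:

```python
def relative_exposure(f_number, frames, seconds_each):
    # Light collected per unit of sensor area scales with total
    # integration time over the square of the f-number.
    return frames * seconds_each / f_number ** 2

pi_hq = relative_exposure(3.3, 120, 21)   # f/3.3, 120 x 21s subs
d7500 = relative_exposure(6.3, 42, 30)    # f/6.3, 42 x 30s subs
print(round(pi_hq / d7500, 1))  # ~7.3
```

Two stops of aperture (f/3.3 vs f/6.3 is about a 3.6x area ratio) times twice the total exposure time accounts for essentially the whole gap.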


This makes me wonder what sort of semi-pro astrophotography might be possible using a disassembled small cinema camera (black magic pocket 6k, for instance, or a similar priced full frame mirrorless or DSLR) with a peltier cooler or refrigerant loop heatsink attached to its back side.

How close could one get in performance to a very costly astrophotography setup, for under $10k? Not counting the cost of the telescope/optics in front of it, just talking about the sensor part.


Yeah, it's usable but not great, because ultimately larger sensors will still give you better SNR for astrophotography. A sensor with 10X the area will be able to produce an image with the same final SNR in 1/10 the time.

Also I had lots of faint banding issues with RPi cameras. Nobody was able to solve my problem on forums.

https://www.raspberrypi.org/forums/viewtopic.php?t=287866


I've seen similar banding in some tests when I had set analog gain to 1.0 on a warm night. With analog gain set to 16.0 and Peltier cooler attached to back of camera PCB it was gone.

My guess is that IMX477 does some software de-noise that causes banding. I tested this guess by calculating power spectrum of a dark frame and there was noticeable drop in higher frequencies. Unfortunately, even disabling hot-pixel detection did not help. I did not investigate it further.


> Peltier cooler attached to back of camera PCB

Do you have plans or pics of how to do this? Especially how to do it without getting condensation on the electronics?

Thanks!


Honestly, it is just a frankenstein monster of whatever I had on hand: https://imgur.com/a/QhvmbUa

3mm thermal pad I used between PCB and cooler: https://botland.store/thermoconductive-tapes-pastes/6060-the...

Radiator (I used the smaller one that typically goes on cold side of Peltier): https://botland.store/peltier-elements/10024-heatsink-with-a...

Fan is standard 40mm x 40mm, and there is some generic copper thermal paste between the Peltier and the radiator. The plywood mount is DIY, made with hole drills. It is a bit too thick, so I used 2x C-CS mount adapters (like the one bundled with the camera) to get more space, otherwise I would not be able to screw in the lens adapter.

There was a bit of neoprene foam between radiator and fan to reduce vibrations but it looks like I lost it somewhere.

About condensation: it was always wet or even icy in operation, but it never caused any electrical issues; I guess the condensed water was not very conductive? Once on a cold night (about 5C) I had water condensation on the sensor itself, but it did not cause any lasting damage. I've read that putting a fresh silica gel desiccant packet into the lens adapter helps. I had plans to attach a temperature probe to the lens mount and control cooling power from the Pi itself; will get back to it in the future. The camera sensor itself has an embedded thermometer, but it is not exposed in the camera driver.


I agree, it would be nice, but is there no way for you or me to construct such a thing using some existing chipset or module and constructing an interface to the Raspberry Pi?


Is there a solid open-source Rpi-based home security camera solution yet? I'd like to set up something of comparable quality to say a Ring doorbell camera or a Nest/Blink security cam or a baby monitor but that I can fully control.


Have you looked at motionEyeOS? I'm not sure how it compares to Nest/Blink but I got along very well with it since 2015 (originally as motionPie).

https://github.com/ccrisan/motioneyeos


Not rpi, but I think there seems to be a pretty solid system built up around the esp32-cam devices using esphome, which are actually quite a bit cheaper and lower power than rpi.

https://home-assistant-guide.com/2020/10/08/building-a-video...


Not Rpi, but much more of an integrated design. The PineCube (from Pine64) is basically designed to be just this -- a home security camera with completely open software.

https://pine64.com/product/pinecube-dev-kit/


That's been a dream of mine for a long time. I don't think there's anything quite like we want just yet.

However, there is a super cool open source project from the author of GKrellM (remember that from the ancient days of Linux?). He's using the Pi's built-in hardware video encoder to get high-quality motion detection very cheaply. The basic idea seems to be: when the encoder produces a lot of bits, there must have been some motion in the frame.

https://github.com/billw2/pikrellcam
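A toy sketch of that idea - encoded-frame size as a motion signal - in plain Python. This illustrates the principle as described above; pikrellcam's actual implementation differs (it reads the encoder's motion-vector output):

```python
from collections import deque

def motion_events(frame_sizes, window=30, factor=2.0):
    """Flag frames whose encoded size spikes above the recent average.

    The intuition: an H.264 encoder spends far more bits on frames
    that differ from their predecessors, so a sudden jump in encoded
    frame size is a cheap proxy for motion in the scene.
    """
    history = deque(maxlen=window)
    events = []
    for i, size in enumerate(frame_sizes):
        if len(history) == window and size > factor * (sum(history) / window):
            events.append(i)
        history.append(size)
    return events

# Static scene (~1 KB per frame) with a burst of motion at frame 40.
sizes = [1000] * 40 + [5000] * 5 + [1000] * 20
print(motion_events(sizes))  # frames 40-44 flagged
```

The rolling window makes the threshold adaptive, so slow lighting changes (sunset, clouds) raise the baseline instead of triggering events.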


I'm thinking of looking at this...

https://github.com/ccrisan/motioneyeos

...has anyone used it?


MotionEye works very well. Read all the docs first before messing with the settings, as they are not intuitive - trying to tweak the motion trigger settings is a bit of a pain (frames threshold, timeout, etc.) but can be done, and there is a debug frame setting which is helpful.

Works with the Raspberry Pi cameras but also with any USB cam as long as you choose the correct driver. I have an IR night-view USB cam that gives superior night shots and doesn't require additional power outside the USB connection. You can save all files locally or send them to any number of services - I have all my photos and videos dropped into Google Drive.

I'm also using DuckDNS for external access to the IP cam port for a real-time view into my home whenever I am out and about. The network stream feature in VLC on mobile works really well for that. The only issue I've come across (and this was ~4 months ago) is that 8GB RAM Raspberry Pi 4's are more trouble than they're worth for this application - software support was very much lacking, but the 4GB Pi's work very well.


I use motioneye on a cheap pi zero w to monitor my wife's pottery kiln. It's great! I haven't tested it for storage/retrieval though.


I briefly used Kerberos.io a few years ago (kind of a difficult name SEO-wise), and it seemed to work pretty well.


I've been thinking about just getting an off-the-shelf system like a Hikvision IP camera and NVR, then running it on a separate network disconnected from the internet. It shouldn't be much more expensive, but I'd still probably disassemble the cameras to make sure there is no WiFi or other means to leak data.


I set up a Reolink PoE camera with a PoE injector connected directly to an RPi. The RPi itself is connected to the WiFi, and I have Home Assistant running on it; the camera can generate an event when it detects motion that Home Assistant can do stuff with. I am pretty happy with this solution.


> open-source . . . home security camera solution

i (deeply) reject your inquiry as a harmfully narrow, reductive consumerist ask, when this could be a much more neutral, open ended friendly question. you have qualified this from the start as a "solution," which is against the best premises, the best strengths of open source: "small pieces, loosely coupled."

the best software is small, purposeful, targeted. interwoven with other systems. only then is open source able to continue to focus on innovation & advancement, without becoming mired down in endless maintenance & complexity.

you should re-evaluate your ask, to ask for something that, will, in the end, not becoming limiting & ossified. open source ought best avoid the pretense commercial software competes on, of being a complete and final thing, of being everything to everyone. open source ought be more humble, and for this, it is better. ask, instead of a solution, after what systems of software might help one accomplish the home security systems they might want to build.

the best piece of open source home security cameras that I've seen is Frigate, which has masking & less interesting to me but probably interesting to many, object detection. designed for home assistant but it has other uses. much assembly required. good. solve your problem how you want to solve it: not how everyone else also has to.

i'd point out that home assistant itself is regarded by many as somewhat of an abomination, too big, unwieldy. core has 1.4k issues open and almost 300 PRs open. it's a shit show. it does way too much, it's way too monolithic, the project is basically doomed to stay where it is. it's a solution, and one that will rot & be no better in 3, 30, or 300 years: there's no open future here. its only upside is that it is a plugin architecture, that it is an interoperation layer, that projects like Frigate can contribute & add capabilities to. home assistant is hugely popular, but basically its core redeeming feature is that it allows small pieces, loosely coupled, to do something. home assistant itself is an anti-solution. frigate is an anti-solution. they are both parts, pieces, and in that is the strength. and hopefully, someday, we can find a better system of pieces such that we can get rid of bloody ugly home assistant.

https://github.com/blakeblackshear/frigate


I toyed with this idea, as I find taking pictures with my phone to be an unsatisfactory experience, and carrying around my Canon EOS is annoying as it's too bulky.

What stopped me was: 1) Raspberry PI boot-up time. It's long enough to prevent you from quickly snapping a picture when you see it - you have to be prepared. 2) Battery power that doesn't run down within a couple of days of non-use.


Boot times of 3-4 seconds are achievable with a customized buildroot [1] [2] if you don’t need network, USB, HDMI, and other services.

[1] https://mitxela.com/projects/thermal_paper_polaroid

[2] https://himeshp.blogspot.com/2018/08/fast-boot-with-raspberr...


There are boatloads of 12 megapixel or better point and shoots in the used market that will produce excellent pictures. A much better experience than a phone because it's a dedicated device. And because they usually have a substantial zoom range from wide to tight.

With only a slight bit of patience, you can pick up something like a Panasonic Lumix ZS-6 for $30 on US eBay including shipping. Long battery life, fits in a pocket.

I removed the hot-mirror, epoxied a 720nm filter to the front and use mine for infra-red.

Only slightly larger, the Sony NEX C3 is a tiny APS-C camera, not much bigger than a point and shoot. It was the second iteration of the NEX3. Paired with the slightly more recent 16-50mm pancake zoom, it is a great everyday carry with 16mp.

To me, an rPi camera is an interesting idea and for a specialized use, its programmability would be an advantage. As an everyday camera, it's a bit absurd beyond telling people about it. YMMV.


I think the bit I missed out was, I'd like a way for when I take pictures for them to be automatically synced to my cloud storage (not one of the main providers) and/or published on my website.


Buy a point and shoot with WiFi.

Put a rPi in your other pocket.

Run cron.

At least it's not a pony and free M&M's.


I have to wonder what phone you've got. I have a Pixel 4a 5G and it takes the best pictures out of any camera I've had (usually cheap Canon PowerShots, which are GREAT cameras). I've been extremely impressed with its abilities. I can't imagine that any homebrewed solution could match it without thousands of hours of work.


I think it's me tbh. Trying to hold the phone, and press the on-screen buttons just isn't a nice workflow for me. I miss using a proper camera form factor.


There are c-mount to Canon EOS lens adapters. Could take this project to the next level!


I tried this with little success. The c-mount sensor is so heavily cropped that even with the EOS lens adapter you only get the middle of the shot... also I haven't found a good solution for focus and f-stop, which is buried inside the EOS pin protocol. I'm sure it's solvable but haven't found it yet.


The big problem is that unless you use fisheye lenses like 6 or 11mm full frame (35mm equivalent), the full frame equivalent focal length you get with the HQ camera sensor is practically in the exotic telephoto range; it's a crop factor of about 5.2x.

It's nice for things like astronomy (your nice 200mm f/2.8 is now 1000mm+) or bird/nature watching, but it's highly impractical for anything but specialist portraiture or walk around use.

Most of the c mount lenses available are pretty poor, and getting focus right with anything not way stopped down (especially without a large display) can be super tricky.


Really nice to see gpiozero being used in the wild.

https://gpiozero.readthedocs.io/en/stable/
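Not from the linked project, but in its spirit: a minimal one-button camera sketch with gpiozero and picamera. The GPIO pin, save path, and filename scheme are all assumptions, and the capture loop only works on a Pi with the camera enabled:

```python
# Minimal shutter-button camera sketch (assumed wiring: button on GPIO17).
from datetime import datetime


def photo_filename(now=None, prefix="/home/pi/photos"):
    """Timestamped, lexically sortable filename for each shot."""
    now = now or datetime.now()
    return f"{prefix}/img_{now:%Y%m%d_%H%M%S}.jpg"


def run_camera(pin=17):
    """Block on the button and capture a still on every press (Pi-only)."""
    from gpiozero import Button    # GPIO button helper
    from picamera import PiCamera  # legacy Raspberry Pi camera stack

    button = Button(pin)
    with PiCamera() as camera:
        while True:
            button.wait_for_press()          # wait for the shutter press
            camera.capture(photo_filename())  # save a timestamped JPEG
```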


I would love to see a guide to fitting an rpi camera module into a film camera back, to bring old film cameras into the modern age.


During the quarantine, I started photographing on film, developing the film at home and digitizing the negatives using my DSLR. At least for B/W film, the process of developing film yourself is dead easy and I'm happy to have a hobby away from my computer. Also, having a price per picture and only a limited amount of shots helps me actually think about composing nice pictures instead of taking 5 almost identical images and moving on.

In general, film photography is having a comeback. Prices for a few models of used film cameras have skyrocketed in recent years.

Personally, I find photographing on film really rewarding. Having a physical product in the end (be it a print of the image or only the negatives) makes the process more enjoyable. So if you have some old film cameras lying around, I can only recommend giving them a try. Maybe there are even old films with old memories in these cameras.


I use film camera lenses all the time on my full-frame consumer digital camera, which is at present a slightly better way to bring 35mm film camera lenses back into the modern age.

The problem with retrofitting a film camera back with a Pi camera is that the Pi camera has a dinky little IMX477 sensor which only covers a small, small fraction of the area that would normally be illuminated on 35mm film, so you would not get very good images at all.

If they came out with a full-frame sensor that plugged into the Pi though, that would be awesome.

That said -- that's for 35mm cameras. Now there are also other film cameras ... I am working on using a Pi camera to scan a large format 4x5 area to bring a Toyo view camera back into the modern age :). It takes a good 15-20 minutes to scan the image and I get gigapixel results. Still a work in progress. Undoing the effect of CRA optimization on the sensor's microlens array is annoying.

https://www.instagram.com/dheeranet.large/


Cool. I remember when flatbed scanners were pressed into service to make (very) large format images.

Stephen Johnson was playing around with this in the 1990's.

http://www.betterlight.com/field_photography.html

"I initially captured 180 degrees of view in a 6,000 x 40,055 pixel image, but soon learned that Photoshop was limited to opening files with less than 30,000 pixels in either dimension, so I had to perform surgery on the original TIFF file to reduce the image to just under this limit."


I thought about that method as well, and after taking apart about 3 scanners it stopped being fun. Scanner assemblies are really hard to work with. Especially since some of them strobe RGB lights instead of using a color sensor, some use microlenses, and some won't start scanning if they detect that the light has failed (and you don't want the light for photography, so you disconnect the lights but then find that it refuses to scan). However, if there's a hackable linear color CCD that is 4 inches long that I can wire into an RPi, that might be super interesting.


Now that is a neat idea. I was thinking about making a film scanner that can auto scan + cut rolls of 120 and 35 but this is way neater. Thank you for sharing!


It would have to vary pretty heavily by film camera model, and many probably can't be nondestructively modified that way. Nikon F and whatever Canon's film flagships were, sure - those are designed to take a motor drive winder, so you can remove the back cover entirely. But my heirloom Nikkormat FTn, for example, wasn't designed that way, and removing the film cover hinge pin would at best be a very fiddly task with a significant risk of damaging the hinge.

That said, and assuming you use a camera module that you can disassemble far enough to expose the sensor, it shouldn't be too hard. The easiest approach, I think, would be to design and 3D-print a replacement cover with a light-tight fitting to place the sensor on center at the flange focal distance (ie in the designed film plane), and route whatever cables out to where you could connect them. Maybe also a case for the Pi that has a 1/4"-20 screw to mount on the tripod socket, just so you don't have to cram your face past it to get a good look down the viewfinder.

You'd probably have a hard time getting anything like a wide-angle shot. I don't know offhand what sensor sizes are common in RPi camera modules, but I feel like expecting 1"-class would be expecting too much, so you'd be dealing with a pretty fierce crop factor.


I was thinking about this yesterday while looking for resources on building a board to read data from a modern sensor from something like an a7 (turns out it's way over my head). The issue you'd run into with the Pi HQ camera module, assuming you can get the focal plane right and everything to fit, is that the sensor is way smaller than 35mm film, so you'll have a rather large crop factor.


The issue is the difference in “sensor” size. The “crop factor” of the rPi camera is 5.5 so a 50mm lens provides an angle of view equivalent to a 275mm lens for 35mm film.

To get down into "normal" lens equivalent would require something like 8mm. A fisheye would probably be ok because the center tends to be relatively less distorted. But if there are 8mm rectilinear lenses for 35mm film cameras, you probably can't find one and maybe couldn't afford one... its price would buy a lot of film.
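The crop-factor arithmetic behind these equivalents is straightforward. A quick sketch, using the 7.9mm sensor diagonal quoted elsewhere in this thread and the ~43.3mm full-frame diagonal (exact values give ~5.5x, matching the figure above):

```python
# Full-frame equivalent focal length for the Pi HQ camera's small sensor.
FULL_FRAME_DIAG_MM = 43.27  # diagonal of a 36x24mm frame
HQ_CAMERA_DIAG_MM = 7.9     # Pi HQ camera sensor diagonal (from this thread)


def crop_factor(sensor_diag_mm, ref_diag_mm=FULL_FRAME_DIAG_MM):
    """Ratio of the 35mm reference diagonal to the sensor's diagonal."""
    return ref_diag_mm / sensor_diag_mm


def equivalent_focal_length(focal_mm, sensor_diag_mm=HQ_CAMERA_DIAG_MM):
    """Full-frame equivalent focal length for a lens on the given sensor."""
    return focal_mm * crop_factor(sensor_diag_mm)


print(round(crop_factor(HQ_CAMERA_DIAG_MM), 1))   # ~5.5x
print(round(equivalent_focal_length(50)))          # a 50mm lens frames like ~274mm
```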

I mean there are digital sensors that retrofit film cameras. Products for Hasselblad 500 series and Mamiya RB have been around since the early days of digital. They have always been less than $100k and still are today.

Anyway if you want to use an old film camera, just buy some film and have a go. They are incredible mechanical devices and a pleasure in the hand and produce that film look naturally.


Heh, with the amount of cropping, you don't need a rectilinear fisheye!


This may be a dumb question, but I haven't borked around with the Pi Camera at all yet.... Why does the lens have a megapixel rating? I'd assume that there's no imaging sensor in it, right?


The rPi camera accepts c-mount lenses. C-mount is a mechanical standard for the threaded coupling of lens and camera.

It is not an optical standard.

C-mount lenses project image circles of different sizes depending on their intended use. The initial use was 8mm film cameras. But it was also used by lenses designed to cover the larger super8 film image standard. And today there is at least one c-mount lens that can nominally cover APS-C though with noticeable vignette...anyway...

The rPi camera sensor has a diagonal of 7.9mm. This is about 20% larger area than super8 film.

For a photographer whose opinions of image quality revolve around technical details, such a lens intended for the super8 format might produce about 10 "acceptable" megapixels on a 12mp sensor.

Most photographers tend to think about image quality in those terms. Limiting the specifications that way heads off complaints about unfulfilled expectations. It is easier to hold strong opinions about technical measures than to consider aesthetic possibility created by a lens’ optical limitations.


> Why does the lens have a megapixel rating? I'd assume that there's no imaging sensor in it, right?

The megapixel rating is an indicator for sharpness. There are more technical ways to rate sharpness, but listing the megapixel rating and sensor size is a quick way to suggest that the lens is sharp enough to resolve pixels that small.


Good question. It looks like the Raspberry Pi HQ camera module (the bit with the sensor) is rated at 12MP. My guess is that the angle of this lens means you can only make use of the middle portion of the sensor, in this case 10MP's worth of it. And as there's only one sensor module on the market that this lens can screw onto, they can tell you the effective megapixel rating of the lens. There's also a 'wide angle' 6mm lens [1] available for the same camera module, rated at 3MP, which I assume casts the image on an even smaller portion of the sensor. I could be completely wrong though, not an expert.

[1] https://thepihut.com/products/raspberry-pi-high-quality-came...


I don't think you are correct. For this reason only: I have played with the Raspberry Pi HQ camera module and a pair of lenses for it and the frame-grab software never returned an image with a black (or otherwise) border. From your description I would instead expect a thick black border around the image where no photon data was captured.

To be sure the frame-grab tool could be sensing the lightless border from the image and cropping, but I doubt it's that clever.


Your description of the frame grab software output is consistent with the proposed degree of cleverness.


The lens having an MP rating is likely just saying that the optics are good enough to resolve an image to the detail that a 10MP sensor would capture.

I know it's a shitty way to specify lens resolution, but that's likely what the manufacturer called it, not the fault of the article's author.


Thank you. That makes sense. Yes - that's the description from the manufacturer, not the post author. The lens may be found here: https://www.adafruit.com/product/4562


They are using megapixel to mean "modulation transfer function": https://en.wikipedia.org/wiki/Optical_transfer_function

As many of the replies state, it doesn't make sense. I think the original author knew of the idea, but not the word. Now you can know the word :)


It doesn't make sense, no. The whole article seems very barebones - I'd have expected to see discussion of how the sensor-lens system needs to work for proper focus, at least. Granted they link the STL for their case, which will be designed for correct alignment, but some discussion would still be useful to those wanting to mod for different lenses or sensors.


Correct, the MTF of a lens is meaningless unless the lens and sensor are aligned (aka focused) correctly. Contrast in the image is directly related to the square-wave geometrical MTF. Once you factor in focus/alignment error, it can be difficult to tell the difference between a "3MP" and "12MP" rated lens. We're talking about <10 micron accuracy.

This is why companies like Apple use 6-axis active alignment machines to manufacture every camera module.

The "Megapixel" rating of a lens is based on the nominal as-designed MTF and image sensor Format. There's a lot of manufacturer fudging that goes on with these ratings. And some in China literally print whatever they want on the side of a lens. I've seen the same lens with both "3MP" and "5MP" printed on it.

Here are a few tips on how to focus a camera. Regardless, it'll be hard to get within 2-5 microns with a CS Mount 1"-32TPI thread. https://commonlands.com/blogs/technical/how-to-focus-a-camer...
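To put numbers on the micron-scale tolerance mentioned above, a sketch using the standard sensor-side depth-of-focus approximation (total tolerance ≈ 2·N·c, with f-number N and blur circle c). Taking a one-pixel blur budget with the IMX477's 1.55 µm pixel pitch is my assumption, not from the comment:

```python
# Why lens-to-sensor alignment needs micron-level accuracy:
# sensor-side depth of focus is roughly +/- N*c around perfect focus.
PIXEL_PITCH_UM = 1.55  # IMX477 pixel pitch; one-pixel blur budget assumed


def depth_of_focus_um(f_number, blur_circle_um=PIXEL_PITCH_UM):
    """Total sensor-side focus tolerance in microns (2 * N * c)."""
    return 2 * f_number * blur_circle_um


print(depth_of_focus_um(2.8))  # ~8.7 um total at f/2.8 -- under 10 microns
print(depth_of_focus_um(8.0))  # stopping down to f/8 relaxes the tolerance
```

This is consistent with the "<10 micron" figure above for a fast lens, and shows why stopping way down makes focus so much more forgiving.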


this is amazing :)


Just buy a used Lumix GF2



