Piipcam – 1080P IP Camera on a Raspberry Pi Zero W (github.com/sepfy)
213 points by mattcourant on Nov 1, 2020 | 84 comments



It amazes me, really, that for as popular as this use case seems to be there isn't a simple, well-packaged way to stream a camera off a pi.

I've been trying to get this working lately and nothing works "well". Motion is popular, but is limited to mjpeg rather than h.264. uv4l is closed source and not up to date with current OS versions. None of the various cvlc incantations to do better actually seem to work on my pi zero w, etc.


I’d like to plug my project, APStreamline for just this use case (https://github.com/shortstheory/APStreamline). It supports network adaptive streaming which adjusts the resolution and bitrate of the H264 video streamed from the Pi camera depending on the quality of your network connection. You can then use an RTSP player such as VLC to view the video stream. It works great with other types of cameras too!
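For example, viewing from another machine is a one-liner in VLC. A sketch, assuming the Pi's address and APStreamline's RTSP port and mount point (8554 and /cam here are illustrative; check the README for the actual URL):

  vlc rtsp://192.168.1.50:8554/cam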


Took a look at your GitHub link, looks like you put in quite a lot of work.

Looks very interesting.

When you say "works great with other types of cameras", could you elaborate? Which other cameras have you tested with?


Thank you! The cameras for which I have added special support are:

* Logitech C920

* Raspberry Pi Camera

* e-Con AR0521

* ZED2 Depth camera (in V4L2 mode)

In case your camera is not one of the above, APStreamline falls back to requesting an MJPG stream from the camera and then encoding it to H264 using the x264enc software encoder. The software encoder has good quality, but it requires more CPU power.


Thanks for the in-depth reply.

Is there some process you've set up on GitHub for others to follow if we test with our own cameras and want to let you know?


Yes, the way to test your own camera is to connect it to your computer and launch APStreamline. As the sibling comment suggested, feel free to add a GitHub issue if you want your camera to be supported. In case you want to DIY support for your camera, the steps here https://arnavdhamija.com/2020/10/29/apstreamline-v2/ have an outline of what to do.


Will do, thank you.


A pull request?


Yeah, I wasn't sure if they had a process for it outside of pull requests, as in some other channel or method for hardware. But you're right, pull requests it is, as they said.


Can this stream h264 directly from a Logitech C920 rather than decoding then encoding?


Yes! This uses the H264 stream encoded by the C920 so you will be able to get an HD stream even on low powered devices such as the Raspberry Pi.


This is sadly 'just' homekit, but I was pretty impressed with the polish of the solution:

https://hochgatterer.me/hkcam/

I agree with the comment about RTSP though. Most solutions have multiple seconds of latency, which is a bit suboptimal.


This is really nice.


> I've been trying to get this working lately and nothing works "well". Motion is popular, but is limited to mjpeg rather than h.264. uv4l is closed source and not up to date with current OS versions. None of the various cvlc incantations to do better actually seem to work on my pi zero w, etc.

Since you use motion, I suppose you want to have some kind of detection going on.

I was toying with 2 pi0+cam and one pi4 some weeks ago.

Here's my conclusion (I should write a blog post about it):

- use a pi0+camera dedicated to live streaming; that's the one you want to connect to to see what's going on (fixed IP, RTSP stream)

- use another pi0+camera or add an IR sensor to the first pi0, in the same spot, to detect motion events (either through motion installed on the pi or the IR sensor)

- use a third connected pi4 for continuous recording of the live stream (in chunks of 5 minutes) to HDD/SSD, and/or recording of the live feed once motion is triggered (you can use motion hooks to call an API on this pi from the pi0)

- you could also simply do motion detection on the pi4, but you are at the mercy of artifacts in the stream and they WILL trigger motion detection; that's why you want to do motion detection closest to the source

Motion introduces too much latency to use as a two-in-one "detect and live stream" solution (MJPEG conversion takes a lot of CPU cycles and adds artifacts).

Most motionEyeOS tutorials I read are PoCs that make you install motionEyeOS on every pi and use the motion MJPEG stream instead of an h264 feed from the camera. That also introduces a lot of CPU bottlenecks and unreliable network connectivity.

I found that the motion web UI is now enough for live streaming of what motion "sees", but motionEyeOS helps in understanding many of motion's options. It's especially useful for drawing masks. And then you move on to building your own infrastructure with those bricks (HTTP API, live streaming, h264 streaming, etc.).


It seems this setup gets close to the Jetson Nano 2GB price range ($59), which from my experience is an order of magnitude faster than the RPi 4 for computer vision/video processing tasks. Is there any advantage to using Motion over nvidia-deepstream?


I don't know much about nvidia-deepstream. Motion is a standalone, ready-to-use program that can take multiple streams and detect frame-to-frame differences with configurable settings. Nvidia-deepstream seems to be an SDK that requires more specific AI/ML knowledge to set up motion detection.


Check out rpos (https://github.com/BreeeZe/rpos); it turns your Pi into a compliant-enough Onvif camera that you can use with NVRs.

Both rpos and piipcam serve h.264 RTSP streams that you can pull into OBS and turn into a virtual webcam.


I ended up running

  raspivid -l -o tcp://0.0.0.0:3333 --mode 2 -b 3000000 -t 0 --drc off --flush
looped in a screen session on the rpi (running raspbian), and

  mpv --profile=low-latency tcp://192.168.1.100:3333/
on the client whenever I wanted to see the camera. It wasn't polished, but it got the job done. It died probably due to SD card flakiness, and then I had to move the camera anyway. Next incarnation will be nfsroot.
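(The "looped" part was nothing fancy; roughly something like this, so raspivid gets restarted whenever it dies:)

  while true; do
    raspivid -l -o tcp://0.0.0.0:3333 --mode 2 -b 3000000 -t 0 --drc off --flush
    sleep 1
  done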


Gstreamer? I've used it in the past, both professionally and personally, to generate rtsp/rtp and it worked great.


I second this. Using gst-launch, you can run a gstreamer pipeline in one line in the terminal that generates a hardware h264-encoded stream that you can tune to any bitrate. Something like this:

  gst-launch-1.0 rtpbin name=rtpbin rpicamsrc preview=0 bitrate=4000000 do-timestamp=1 ! 'video/x-h264, width=1920, height=1080, framerate=30/1,profile=high' ! h264parse ! rtph264pay config-interval=1 pt=96 ! rtpbin.send_rtp_sink_0 rtpbin.send_rtp_src_0 ! udpsink host=YOUR_IP port=YOUR_PORT rtpbin.send_rtcp_src_0 sync=false async=false
Gstreamer should include rpicamsrc directly nowadays for your convenience.
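On the receiving end, a matching playback pipeline looks roughly like this (a sketch; the caps must agree with what the sender emits, and YOUR_PORT is the port used above):

  gst-launch-1.0 udpsrc port=YOUR_PORT caps='application/x-rtp, media=video, clock-rate=90000, encoding-name=H264, payload=96' ! rtph264depay ! h264parse ! avdec_h264 ! autovideosink sync=false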


I'd love to hear more about how you did it! I found streaming (and restreaming) RTP fairly straightforward but annoying to set up with gst-launch (there seems to be no simple option to get an SDP file out; apparently you are meant to lovingly handcraft one from the debug log).

But I couldn't figure out how to stream RTSP, especially not in a way that would preserve NTP timestamp information when restreaming (for synchronization with other streams). There is gst-rtsp-server, but as far as I can tell the idea is that you use it as a library to build your own custom server code.

Is there a simple way I was missing?


This is the comment I was scrolling for. GStreamer is awesome.


Same, have been looking for a slightly different use case: Using RPi as a video capture card that takes an HDMI input and outputs H.264. Even better would have been to implement it as a UVC device that can be plugged in as a video source.

I was surprised that gamers haven't implemented this yet as a DIY Elgato project.


There are several entire companies whose core product is a security camera and that still cannot get a security camera to work "well". At least 50% of it is making the user experience smooth and loading times extremely low. You would think that the product managers responsible for this would take ownership and make sure it works perfectly. However, there are very few great product managers out there. Most of them fail miserably and end up giving even the good product managers a bad reputation. At the end of the day, the guy who wrote this could well be just a single poor plonker in his mom's basement who never stood a chance against the most discriminating of users like us.


RPi Cam is very good: https://elinux.org/RPi-Cam-Web-Interface

But it only works with the Pi camera module, not with USB cameras.


It's not that "well-packaged", but the last time I tried it, JSMpeg worked nicely when streaming from an RPi3. It has very low latency (~70ms), reasonable quality/bitrate (MPEG1, rather than MJPEG), and is viewable in any browser.

https://github.com/phoboslab/jsmpeg#example-setup-for-stream...


Is it that there is no way to "just" run a small custom-built daemon script on a Linux box, not just in terms of camera apps but in general?

Like, I have this 10-line bash script that I want to run for a few years with no maintenance, that doesn't get corrupted by power loss or lock up from memory leaks, that auto-restarts when necessary, but doesn't bring down the network either.

And doing it right enough requires 10+ years of experience managing and developing on GNU/Linux, maybe a few notable certificates, a bit of embedded background being a plus, and about 0.25-1.0 man-months.

I've tried something like that on Alpine Linux, and it didn't always work. Mostly due to proficiency issues on my part, but it was definitely unnecessarily hard.


Something like this?

    [Unit]
    Description=auto start script
    After=multi-user.target
    [Service]
    Type=simple
    ExecStart=/home/pi/script.sh
    User=pi
    WorkingDirectory=/home/pi
    Restart=on-failure
    [Install]
    WantedBy=multi-user.target
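
Save it as e.g. /etc/systemd/system/script.service, then:

    sudo systemctl daemon-reload
    sudo systemctl enable --now script.service

systemd starts it at boot and restarts it on failure per the Restart= line.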


There are fancier modern things but https://cr.yp.to/daemontools.html still works great for keeping a simple script running.
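A minimal sketch, assuming the standard /service layout watched by svscan (the service name and script path are illustrative):

    #!/bin/sh
    # /service/camera/run -- supervise re-runs this whenever it exits
    exec /home/pi/script.sh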


If I had to do that, I'd go for a read-only rootfs (which IIRC the latest raspbian/raspi-config version can set up for you) combined with a systemd unit file that takes care of auto restarts.

The only thing I don't know how to deal with is a memory leak, but as long as it's a simple script and not a Java app, there shouldn't be anything running that could leak memory.
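
That said, systemd itself can act as a backstop for leaks. Adding something like this to the [Service] section above (a sketch; MemoryMax needs cgroup v2, and 128M is an arbitrary cap) makes the kernel kill the script if it balloons, after which the existing Restart= line brings it back:

    MemoryMax=128M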


There are a significant number of parts to put together, software-wise, but you can build a strong streaming solution with gstreamer or ffmpeg. I've done so and shipped hundreds of devices. It was not a trivial amount of work. We eventually transitioned to a purpose-built dedicated camera solution, however. If you're capturing sound, you will also have challenges, as the BCM2xxx family has... interesting audio hardware, especially if you want to reduce your BOM and integrate PDM microphones.


Can ffmpeg work? I've used it with IP cameras before with much success.


MotionEyeOS consistently crashes for me on Raspberry Pi Zeros. It's just not stable enough. I suppose the transcoding to MJPEG is too heavy. But I'll try some of the alternatives mentioned here.


> limited to mjpeg rather than h.264

In fact, I was hoping it could stream h.265 to save bandwidth. I've heard it was supposed to have hardware h.265 encoding.


The hardware supports h.265 decode but not encode.


What about Raspberry Pi 4?


I should have been more specific: the 4 supports h.265 decode but not encode, none of the other Pi hardware has either.



Just wait until you try to include audio in the stream...


I use Raspberry Pis to do the camera work for my robots. But I have a web interface.

http://robot247.io/robot/pibot

I think overall these devices are amazing for lifting video from one place to another.

One day, though, I will work on getting h.264 working.


Very cool robot!


I believe you can do almost the same by just running a vlc command in the pi[1]. A complete disk image might be a bit overkill.

[1] https://raspberrypi.stackexchange.com/questions/23182/how-to...
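
For reference, the usual incantation from that thread is along these lines (quoted from memory as a sketch; it assumes the legacy raspivid stack and serves RTSP on port 8554):

  raspivid -o - -t 0 -w 1280 -h 720 -fps 25 | cvlc -vvv stream:///dev/stdin --sout '#rtp{sdp=rtsp://:8554/}' :demux=h264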


You can, but that solution uses a Raspberry Pi camera, not a USB one. They have completely different hardware interfaces.


Are you sure? The image in the linked article shows a Pi zero with the Pi camera case (not USB).


Sorry, my fault. You are correct.

I implemented a similar solution (with HTTP-based streaming), but using Motion with an old USB camera. I assumed this solution would also use a USB camera.

I still prefer the Motion solution. Streaming through http opens a lot more possibilities.


IP cameras need their Linksys moment. The hardware has advanced immensely but the software is terrible.


I highly recommend Motion[1]; check it out if you haven't. I've ripped out and replaced subpar proprietary vendor crap with it (keep the cameras, depending on model; ditch the included DVR systems).

[1] https://motion-project.github.io/


Motion is great, but most (all?) consumer IP cameras and DVRs/NVRs are Swiss cheese security-wise. They routinely phone home or make connections to servers in China, and some even contain hardcoded passwords, so the only way to use them reliably would involve a firewall that only accepts connections from trusted sources while preventing the camera subnet from sending anything to the outside world.

What we need is alternative firmware just like OpenWRT, but aimed at IP cameras. That would be a real challenge, so it is probably better to build the IP camera from the ground up with security and trust in mind. The PineCube from Pine64 seems a really good step in this direction. Just like with their PinePhone: since it's so hard to liberate an existing phone, it's actually easier to design and produce from scratch one that is really free and open and therefore trustworthy.

Friendlyarm also has some interesting options for building small Linux-driven embedded boards with cameras. https://www.friendlyarm.com/index.php?route=product/product&...


Why isn’t motion + motionEye + raspberry pi just that?


I've used motion and it's fine as the DVR part with MJPEG cameras. Something else is needed for modern h264/h265/av1 stuff. I've started coding on it a few times but have never gotten it functional. But my point is about the software that runs in the cameras themselves.



720p or 1080p wireless/wired/PoE IP cameras with FCC certification have been on the market for a few years at the 30ish-dollar price point, and you can buy IP camera modules for about $10 from vendors (mainly in China) and reprogram them with your own firmware as well.

An RPi-based camera is great for learning stuff, but for making product-like devices there are already plenty of options on the market: cheap, easy, and most likely more robust.

Isn't the IP camera a solved problem?


> Isn't the IP camera a solved problem?

Not if you care about:

- image quality: most of these $10 camera modules have "sensors" that only work halfway decently with good lighting and are little more than static noise at night

- general build quality: expect issues with the power supply or at temperature extremes

- security: there's no OpenWRT or equivalent for these, so even if you make your own firmware, you're still responsible for keeping it up to date yourself

- "advanced" features such as a backup battery, durable (!) onboard storage, and a wireless fallback in case an attacker simply cuts the cables


Reolink PoE... Pretty great... like the Unifi of security cameras.


Links? I’d love to just buy the modules and stick them on my own boards.


Not really off-the-shelf; you have to find the module sellers, but yes, it is doable.


I have used this https://github.com/jacksonliam/mjpg-streamer for a number of years with great success.

However a word of warning:

Wifi streaming video can really degrade your wifi for other devices, especially if the streaming device is at the edge of the wifi's range.

Ethernet streaming is really the way forward.


Why not dedicate a WiFi channel to the video then? WiFi routers are inexpensive.


Because it's still not a great solution.

I've done a TON of work in this area for reasons, and can tell you that streaming over 2.4GHz especially is an anti-pattern unless you're dealing with super low-quality streams and/or only 1-2 devices. Even 5GHz can get froggy with traffic contention. Working against a 15fps 1080p stream on 5GHz is actually how I test streams with poor health and unpredictable behavior. Throw streaming over TCP into the mix and oof.

Also throwing another network component into the mix may not work for most folks.

---

PS: Before someone yells at me for saying "TCP streaming" because "clearly UDP is the right tool for the job!": nope. TCP is the use-case.


That's a good question.

You can, and it solves one of the problems: other clients wanting to use your bandwidth. With careful setup and a low noise floor (i.e. not an apartment block), you could do this.

However, you have to remember that you can only really run as fast as your slowest wifi camera[1]. In practical terms, that means you need to make sure all your cameras are syncing at the highest practical speed (which in most cases means not using the onboard antenna).

[1] It's more complex than this, but my understanding is that slower devices eat up the available transmit/receive time, which means that everything else is slowed down.


There's a project called rpos that turns a Raspberry Pi into an Onvif compliant-enough camera that most NVRs can use. Onvif uses RTSP streaming just like this.

https://github.com/BreeeZe/rpos

Either way, OBS supports RTSP so you can pull feeds in and surface them as a virtual webcam to the rest of your OS.


It's a neat project, but doesn't having to connect my computer to the Zero's wifi hotspot mean I can't actually use my own network, since I would not be routing my connection through the Zero? That would make this kind of device kind of useless unless I have a dedicated device to watch the stream from. Or am I missing something?


I use Pi Zero W camera kits for security cameras, and they just associate with their own dedicated wifi AP, which puts them behind a NAT firewall and routes to the internet in the usual fashion.

Accessing the cameras then is no different than accessing anything behind NAT on a home network. The AP can have holes punched for port forwarding to the cameras, some kind of dyndns solution could be used to give them a persistent name if the AP's public network address is dynamic. There are other solutions in this space as well...

My preference however is to not punch any holes and not bother with supporting external connections at this layer. Instead I have the cameras establish and maintain ssh connections w/reverse tunnels on an external server having a static IP on the internet. Those reverse tunnels only listen on localhost ports at the server, requiring a locally executed process to reach the camera tunnels. In the case of my server's configuration, that basically requires logging into the server via ssh to reach the camera tunnels. For an authorized user with ssh access to the server, it's trivial to access the tunnels with a web browser by establishing a SOCKS proxy via ssh and configuring a web browser to use it.
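The moving parts are plain OpenSSH. Stripped down, it's something like this (hostnames and ports are illustrative):

  # on the camera: maintain a reverse tunnel that only listens on the server's localhost
  ssh -N -R 127.0.0.1:8081:localhost:80 tunnel@server.example.com

  # on a client: open a SOCKS proxy through the server...
  ssh -D 1080 me@server.example.com

  # ...point the browser's SOCKS proxy at localhost:1080,
  # then browse to http://127.0.0.1:8081/

plus something like autossh or a systemd restart policy on the camera side to keep the tunnel alive.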

I've ~reproduced this setup for some friends/family members, using extremely cheap VPS instances strictly for terminating the ssh tunnels and providing a self-signed-cert https proxy w/basic auth to reach the camera tunnels more conveniently from a smartphone's browser. It seems to work fine for them once they get the self-signed cert permanently accepted in their phone's browser, and it's more convenient since they're not IT people and won't be running an ssh client anytime soon. The main problem that's come up is reconfiguring their cameras' wifi when they upgrade their home network. They forget about the cameras' wifi dependency, and by the time they discover the problem it's too late and we're talking usb2serial GPIO console to reconfigure wpa_supplicant time.


I haven't looked at the project yet, but... your comment precludes the possibility of your computer having a wired ethernet connection to the internet.

Then the camera could be placed anywhere as long as it can get power, and your computer could connect to both the camera and the internet.

But, I don't understand why the project would really benefit from the Pi Zero hosting a wifi hotspot... it seems like it would be better for it to just join an existing wifi network.


That step should only be for testing and configuring the camera. While you're connected, you log in and configure your own wifi on it.

Preferably that web tool should do it for you.

Either way, a lot of consumer products have that one step.


This is the biggest issue I have, too.

Motion[1] has a lot lower resolution and its video is not good. But, at least, I can access the video stream through HTTP. That makes a huge difference when implementing a solution. For non-tech users you can just pin the camera's streaming webpage to a phone home screen and give instant access to the video stream.

[1] https://motion-project.github.io/


Just a heads up: we have just launched the 1000eyes project at https://1000ey.es, which tries to give you ready-to-use, open source, IPv6-enabled cameras without any configuration needed.


Is it possible to make an IP camera appear on the other end as /dev/videoX with near zero latency? Even with LAN ping times of <2ms I routinely get >200ms of latency with IP cameras, sometimes as much as 1000ms, and can't figure out a good solution.


There were a bunch of people experimenting with using Raspberry Pis for low-latency drone FPV video. It turns out that for some use cases, WiFi in its standard config is a big part of the "getting below a few hundred milliseconds of latency" problem. (Note: trying for low latency on 1080p video is making life harder for yourself too. If you don't _need_ 2-megapixel frames, go smaller. 720 or 600 lines might be fine. Drone FPV often uses 525-line analog video.)

I've got a setup I built based on this: https://befinitiv.wordpress.com/wifibroadcast-analog-like-tr... that's well below 100ms of latency. It's still not as good as analog video for fast drones though.

These days most drone people just blow money on the new-ish DJI digital video stuff (which works on non-DJI drones). It's pretty spectacular, but it's over a grand (in AUD) worth of gear so for now I'm sticking with my old analog video gear.


I have to imagine that's mostly due to encoding, right? I wonder what the cheapest format to get a hw encoder for is.


In the past, with Axis cameras, I have enabled two streams: the high-quality, high-resolution h.264 one with high latency, and a lower-resolution MJPEG stream with much lower latency.


You need to tune the buffers on both the sender and the receiver, as well as choose a codec with low latency. It takes some work, but you should be able to get it below a second without too much trouble.
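
For example, on the receiving side with ffplay, turning off the demuxer buffering already helps a lot (a sketch; the URL is illustrative):

  ffplay -fflags nobuffer -flags low_delay -rtsp_transport tcp rtsp://CAMERA_IP:554/stream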


I think you could probably achieve that with v4l2loopback and ffmpeg. I've done something similar in the past and it works.
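
A minimal sketch of that approach (the device number and URL are illustrative):

  # create a loopback device at /dev/video10
  sudo modprobe v4l2loopback video_nr=10

  # decode the camera stream and feed raw frames into it
  ffmpeg -rtsp_transport tcp -i rtsp://CAMERA_IP:554/stream -f v4l2 -pix_fmt yuv420p /dev/video10

After that, any V4L2-aware app can open /dev/video10 like a local webcam, though the decode step still adds some latency.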


This is great, but why don't people post something more complex than connecting two things together? No disrespect, but even a basic Lego set would have more than two parts.


Does anyone know if any of these projects capture the h.264 stream encoded inside the webcam? I would be interested in setting up something like the Nest paid subscription, but on a local server (meaning record 100% of the video and build a nice interface to navigate it; no "detection of movement" kind of thing involved, except for tagging).


Typically cameras just output the H.264/H.265 stream(s) via rtsp, and optionally a video analytics metadata stream. The part that records video from one or more cameras can be separate and is called a network video recorder (NVR). This split allows you to store several cameras' stuff in one place that is better physically protected and in a better form factor for having a hard drive, access a bunch of cameras' data easily under one UI even if they're made by different manufacturers, etc.

I'm working on an open-source NVR: https://github.com/scottlamb/moonfire-nvr that is secure, will run on a Raspberry Pi 4, and has a good recording schema. I wouldn't describe the UI as nice yet, but I'd welcome help in making it so!

There are some other open source NVRs (I see someone mentioned Shinobi). Probably a couple reasonably-priced commercial software options (people like Blue Iris, but it's Windows-only). Some commercial NAS devices have NVR support (eg Synology). Dedicated NVRs from manufacturers like Dahua/Hikvision (but my experience is they're awful). YMMV.


You can run something local like Shinobi. That just requires any rtsp camera. It can even send you push notifications on a discord channel.

https://shinobi.video/


I've been using https://elinux.org/RPi-Cam-Web-Interface. It has an interesting motion detection and record function.


It does have a lot more functionality, but it doesn't support USB cameras. It only works with the Raspberry Pi camera.


I do distinctly remember someone changing some config files to make it work with a USB camera. Not exactly plug-and-play, but it can be done.


Yeah, I have experienced it too.


As someone who owns a Wii, and uses pcapture... I read this as "pee-pee cam". Probably not a good name for a small camera that can be hidden and run off a battery to stream live video.


How do I create this from a stock Raspbian install?


The pi configuration is at: https://github.com/sepfy/piipcam/tree/master/bsp/meta-piipca...

You'd then have to compile the source.



