It amazes me, really, that for as popular as this use case seems to be there isn't a simple, well-packaged way to stream a camera off a pi.
I've been trying to get this working lately and nothing works "well". Motion is popular, but is limited to mjpeg rather than h.264. uv4l is closed source and not up to date with current OS versions. None of the various cvlc incantations to do better actually seem to work on my pi zero w, etc.
I’d like to plug my project, APStreamline for just this use case (https://github.com/shortstheory/APStreamline). It supports network adaptive streaming which adjusts the resolution and bitrate of the H264 video streamed from the Pi camera depending on the quality of your network connection. You can then use an RTSP player such as VLC to view the video stream. It works great with other types of cameras too!
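If you just want to sanity-check it, VLC can open the stream directly. The address, port, and mount point below are only an example; use the RTSP URL your APStreamline instance actually reports:

    # example only; substitute the URL APStreamline gives you
    vlc rtsp://raspberrypi.local:8554/cam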
Thank you! The list of cameras for which I have added special support are:
* Logitech C920
* Raspberry Pi Camera
* e-Con AR0521
* ZED2 Depth camera (in V4L2 mode)
In case your camera is not one of the above, APStreamline falls back to requesting an MJPG stream from the camera and then encoding it to H264 using the x264enc software encoder. The software encoder has good quality but requires more CPU power.
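Conceptually the fallback is close to this gst-launch pipeline (a simplified sketch rather than the code APStreamline actually runs; the device path, resolution, and bitrate are placeholders):

    # grab MJPG from the camera, decode it, re-encode with x264enc
    gst-launch-1.0 v4l2src device=/dev/video0 \
        ! image/jpeg,width=1280,height=720,framerate=30/1 ! jpegdec \
        ! videoconvert ! x264enc tune=zerolatency bitrate=2000 \
        ! h264parse ! fakesink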
Yes, the way to test your own camera is to connect it to your computer and launch APStreamline. As the sibling comment suggested, feel free to add a GitHub issue if you want your camera to be supported. In case you want to DIY support for your camera, the steps here https://arnavdhamija.com/2020/10/29/apstreamline-v2/ have an outline of what to do.
Yeah, I wasn't sure if they had a process for it outside of pull requests, like some other channel or method for hardware support. But you're right, pull requests it is, they said.
> I've been trying to get this working lately and nothing works "well". Motion is popular, but is limited to mjpeg rather than h.264. uv4l is closed source and not up to date with current OS versions. None of the various cvlc incantations to do better actually seem to work on my pi zero w, etc.
Since you mention Motion, I suppose you want to have some kind of detection going on.
I was toying with two Pi Zero + camera setups and one Pi 4 a few weeks ago.
Here's my conclusion (I should write a blog post about it):
- use a Pi Zero + camera dedicated to live streaming; that's the one you connect to when you want to see what's going on (fixed IP, RTSP stream)
- use another Pi Zero + camera, or add an IR sensor to the first Pi Zero, in the same spot, to detect motion events (either through Motion installed on the Pi or the IR sensor)
- use a third connected Pi 4 for continuous recording of the live stream (in chunks of 5 minutes) to HDD/SSD and/or recording of the live feed once motion is triggered (you can use Motion hooks to trigger an API on this Pi from the Pi Zeros); a recording sketch follows this list
- you could also simply do motion detection on the Pi 4, but you are at the mercy of artifacts in the stream and they WILL trigger motion detection; that's why you want to do motion detection closest to the source
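For the continuous-recording part, the idea is roughly this with ffmpeg (stream URL and paths here are placeholders, not my actual config):

    # copy the RTSP feed into 5-minute MP4 chunks, no re-encoding
    ffmpeg -rtsp_transport tcp -i rtsp://pi0-cam.local:8554/stream \
        -c copy -f segment -segment_time 300 -segment_format mp4 \
        -reset_timestamps 1 -strftime 1 "/mnt/ssd/rec/%Y%m%d-%H%M%S.mp4"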
Motion introduces too much latency to use as a two-in-one "detect and live stream" solution (MJPEG conversion takes a lot of CPU time and adds artifacts).
Most motionEyeOS tutorials I've read are PoCs that make you install motionEyeOS on every Pi and use the Motion MJPEG stream instead of an h264 feed from the camera. That introduces a lot of CPU bottlenecks and unreliable network connectivity.
I found that the Motion web UI is now enough for live streaming of what Motion "sees", but motionEyeOS helps with understanding many of Motion's options. It's especially useful for drawing masks. And then you move on to building your own infrastructure with those bricks (HTTP API, live streaming, h264 streaming, etc.).
It seems this setup gets close to the Jetson Nano 2GB price range ($59), which from my experience is an order of magnitude faster than the RPi 4 for computer vision/video processing tasks. Is there any advantage to using Motion over nvidia-deepstream?
I don't know much about nvidia-deepstream. Motion is a standalone, ready-to-use program that can take multiple streams and detect frame-to-frame differences with different settings. Nvidia-deepstream seems to be an SDK that requires more specific AI/ML knowledge to set up motion detection.
on the client whenever I wanted to see the camera. It wasn't polished, but it got the job done. It died probably due to SD card flakiness, and then I had to move the camera anyway. Next incarnation will be nfsroot.
I second this. Using gst-launch, you can run a gstreamer pipeline in one line in the terminal that generates a hardware h264-encoded stream that you can tune to any bitrate. Something like this (a sketch; the device, host, and bitrate are placeholders, and older Pi stacks used omxh264enc where this uses v4l2h264enc):
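    # stream the camera as RTP/H264 over UDP via the Pi's hardware encoder
    gst-launch-1.0 v4l2src device=/dev/video0 \
        ! video/x-raw,width=1280,height=720,framerate=30/1 \
        ! v4l2h264enc extra-controls="controls,video_bitrate=2000000" \
        ! 'video/x-h264,level=(string)4' \
        ! h264parse ! rtph264pay config-interval=1 pt=96 \
        ! udpsink host=192.168.1.50 port=5000

    # and on the receiving machine, something like:
    gst-launch-1.0 udpsrc port=5000 \
        caps="application/x-rtp,media=video,encoding-name=H264,payload=96" \
        ! rtph264depay ! avdec_h264 ! autovideosink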
I'd love to hear more about how you did it! I found streaming (and restreaming) RTP fairly straightforward but annoying to set up with gst-launch (there seems to be no simple option to get an SDP file out; apparently you are meant to lovingly handcraft one from the debug log).
But I couldn't figure out how to stream RTSP, especially not in a way that would preserve NTP timestamp information when restreaming (for synchronization with other streams). There is gst-rtsp-server, but as far as I can tell the idea is that you use it as a library to build your own custom server code.
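That said, gst-rtsp-server ships a small example binary, test-launch, that wraps a gst-launch-style pipeline in an RTSP server, so simple cases don't need custom server code (it doesn't address the NTP timestamp question, though). A sketch, assuming you've built the examples and have the Pi's V4L2 encoder:

    # serves rtsp://<host>:8554/test; the payloader must be named pay0
    ./test-launch "( v4l2src device=/dev/video0 \
        ! video/x-raw,width=1280,height=720 \
        ! v4l2h264enc ! h264parse ! rtph264pay name=pay0 pt=96 )"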
Same, have been looking for a slightly different use case: Using RPi as a video capture card that takes an HDMI input and outputs H.264. Even better would have been to implement it as a UVC device that can be plugged in as a video source.
I was surprised that gamers haven't implemented this yet as a DIY Elgato project.
There are several entire companies whose core product is a security camera and that still can't get a security camera to work "well". At least 50% of it is making the user experience smooth and loading times extremely low. You would think that the product managers who are responsible for this would take ownership and make sure it works perfectly. However, there are very few great product managers out there. Most of them fail miserably and end up giving even the good product managers a bad reputation. At the end of the day, the guy who wrote this could well be just a single poor plonker in his mom's basement who never stood a chance against the most discriminating of users like us.
It's not that "well-packaged" but last time I tried it, JSMpeg worked nicely when streaming from an RPi3. It has very low latency (~70ms), reasonable quality/bitrate (MPEG1, rather than MJPEG) and is viewable in any browser.
Is there really no way to “just” run a small custom-built daemon script on a Linux box, not just in terms of camera apps but in general?
Like, I have this 10-line bash script that I want to run for a few years with no maintenance; it shouldn't get corrupted by power loss or lock up from memory leaks, and it should auto-restart when necessary, but without bringing down the network either.
And it requires 10+ years of experience in managing and developing on GNU/Linux, maybe a few notable certificates, and a bit of embedded background as a plus, to do it right enough in about 0.25-1.0 man-months.
I’ve tried something like that on Alpine Linux, and it didn’t always work. Mostly due to proficiency issues on my part, but it was definitely unnecessarily hard.
If I had to do that, I'd go for a read-only rootfs (which IIRC the latest raspbian/raspi-config version supports to set up for you) combined with a systemd unit file that takes care of auto restarts.
The only thing I don't know how to deal with is a memory leak, but as long as it's a simple script and not a Java app, there shouldn't be anything running that could leak memory.
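A minimal unit sketch for the auto-restart part (the unit name and script path are made up):

    # /etc/systemd/system/camstream.service (hypothetical)
    [Unit]
    Description=Camera streaming script
    After=network-online.target
    Wants=network-online.target

    [Service]
    ExecStart=/usr/local/bin/stream.sh
    Restart=always
    RestartSec=5

    [Install]
    WantedBy=multi-user.target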
There's a significant number of parts to put together, software-wise, but you can build a strong streaming solution with gstreamer or ffmpeg. I've done so and shipped hundreds of devices. It was not a trivial amount of work. We eventually transitioned to a purpose-built dedicated camera solution, however. If you're capturing sound, you will also have challenges, as the BCM2xxx family has..... interesting audio hardware, especially if you want to reduce your BOM and integrate PDM microphones.
MotionEyeOS consistently crashes for me on raspberry pi zeroes. It's just not stable enough. I suppose it's the transcoding to mjpeg that's too heavy. But I'll try some of the alternatives mentioned here.
I implemented a similar solution (with http-based streaming), but using Motion with an old USB camera. I assumed this solution would also use an USB camera.
I still prefer the Motion solution. Streaming through http opens a lot more possibilities.
I highly recommend Motion; check it out if you haven't. I've ripped out and replaced subpar proprietary vendor crap with it (keep the cameras depending on model, ditch the included DVR systems).
Motion is great, but most (all?) consumer IP cameras and DVRs/NVRs are Swiss cheese security-wise: they routinely phone home or make connections to servers in China, and some even contain hardcoded passwords, so the only way to use them reliably would involve a firewall that only accepts connections from trusted sources while preventing the camera subnet from sending anything to the outside world.
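As a sketch of that firewall idea with nftables (the camera subnet and LAN range here are assumptions; adapt to your layout):

    # drop anything the camera subnet tries to send beyond the LAN
    nft add table inet camguard
    nft add chain inet camguard fwd '{ type filter hook forward priority 0; policy accept; }'
    nft add rule inet camguard fwd ip saddr 192.168.50.0/24 ip daddr != 192.168.0.0/16 drop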
What we need is some alternative firmware just like OpenWRT, but aimed at IP cameras. That would be a real challenge, so it is probably better to build the IP camera from the ground up with security and trust in mind. The PineCube from Pine64 seems a really good step in this direction. Just like with their PinePhone: since it's so hard to liberate an existing phone, it's actually easier to design and produce from scratch one that is really free and open and therefore trustworthy.
I've used motion and it's fine as the DVR part with MJPEG cameras. Something else is needed for modern h264/h265/av1 stuff. I've started coding on it a few times but have never gotten it functional. But my point is about the software that runs in the cameras themselves.
720p or 1080p wireless/wired/PoE IP cameras with FCC certification have been on the market for a few years at a price range of 30-ish dollars, and you can buy IP camera modules for about $10 from vendors (mainly in China) and reprogram them with your own firmware as well.
RPi-based cameras are great for learning stuff, but for making product-like devices there are plenty to choose from on the market already: cheap, easy, and most likely more robust.
- image quality, most of these $10 camera modules have "sensors" that only work halfway decent with good lighting and are little more than static noise at night
- general build quality, expect issues with the power supply or at temperature extremes
- security, there's no OpenWRT or equivalent for these, so even if you make your own firmware you're still responsible to keep it up to date yourself
- no "advanced" features such as a back-up battery, durable (!) on board storage and a wireless fallback in case there is an attacker who simply cuts the cables
I've done a TON of work in this space for reasons and can tell you that streaming over 2.4GHz especially is an anti-pattern unless you're dealing with super low-quality streams and/or only 1-2 devices. Even 5GHz can get froggy with traffic contention. Working against a 15fps 1080p stream on 5GHz is actually how I test streams with poor health and unpredictable behavior. Throw streaming over TCP into the mix and oof.
Also throwing another network component into the mix may not work for most folks.
---
PS: Before someone yells at me for saying "TCP streaming" because "clearly UDP is the right tool for the job!": nope. TCP is the use-case.
You can, and it solves one of the problems: other clients wanting to use your bandwidth. With careful setup, and a low noise floor (ie not an apartment block) you could do this.
However, you have to remember that you can only really run as fast as your slowest wifi camera[1]. That means that, in practical terms, you need to make sure that all your cameras are syncing at the highest practical speed (this means not using the on-board antenna in most cases).
[1] It's more complex than this, but my understanding is that slower devices eat up the available transmit/receive time, which means that everything else is slowed down.
There's a project called rpos that turns a Raspberry Pi into an ONVIF-compliant-enough camera that most NVRs can use. ONVIF uses RTSP streaming just like this.
It's a neat project, but doesn't having to connect my computer to the Zero's wifi hotspot mean I can't actually use my own network, since I would not be routing my connection through a Zero? That would make this kind of device kind of useless unless I have a dedicated device to watch the stream from. Or am I missing something?
I use Pi Zero W camera kits for security cameras, and they just associate with their own dedicated wifi AP, which puts them behind a NAT firewall and routes to the internet in the usual fashion.
Accessing the cameras then is no different than accessing anything behind NAT on a home network. The AP can have holes punched for port forwarding to the cameras, some kind of dyndns solution could be used to give them a persistent name if the AP's public network address is dynamic. There are other solutions in this space as well...
My preference however is to not punch any holes and not bother with supporting external connections at this layer. Instead I have the cameras establish and maintain ssh connections w/reverse tunnels on an external server having a static IP on the internet. Those reverse tunnels only listen on localhost ports at the server, requiring a locally executed process to reach the camera tunnels. In the case of my server's configuration, that basically requires logging into the server via ssh to reach the camera tunnels. For an authorized user with ssh access to the server, it's trivial to access the tunnels with a web browser by establishing a SOCKS proxy via ssh and configuring a web browser to use it.
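Roughly, the moving parts look like this (hostnames and ports here are placeholders, not my actual config, and in practice you'd wrap the tunnel in autossh or a systemd restart):

    # on the camera: reverse tunnel, server-side port bound to localhost only
    ssh -N -R 127.0.0.1:8081:localhost:80 tunnel@static-ip-server.example \
        -o ServerAliveInterval=30 -o ServerAliveCountMax=3

    # on your machine: SOCKS proxy through the server, then point a
    # browser (configured to use the proxy) at http://127.0.0.1:8081
    ssh -D 1080 tunnel@static-ip-server.example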
I've ~reproduced this setup for some friends/family members, using extremely cheap VPS instances strictly for terminating the ssh tunnels and providing a self-signed-cert https proxy w/basic auth to reach the camera tunnels more conveniently using a smartphone's browser. It seems to work fine for them once they get the self-signed cert permanently accepted in their phone's browser, and is more convenient since they're not IT people and won't be running an ssh client anytime soon. The main problem that's come up is reconfiguring the cameras' wifi when they upgrade their home network. They forget about the cameras' wifi dependency, and by the time they discover the problem it's too late and we're talking usb2serial GPIO console to reconfigure wpa_supplicant time.
I haven't looked at the project yet, but... your comment precludes the possibility of your computer having a wired ethernet connection to the internet.
Then the camera could be placed anywhere as long as it can get power, and your computer could connect to both the camera and the internet.
But, I don't understand why the project would really benefit from the Pi Zero hosting a wifi hotspot... it seems like it would be better for it to just join an existing wifi network.
Motion[1] has a lot less resolution and its video is not good. But, at least, I can access the video stream through http. That makes a huge difference when implementing a solution. For non-tech users you can just stick the camera's streaming webpage to a phone home screen and give instant access to the video stream.
Just a heads up: we have just launched the 1000eyes project on https://1000ey.es that tries to give you ready-to-use Open Source and IPv6 enabled cameras without any configuration needed.
Is it possible to make an IP camera appear on the other end as /dev/videoX with near zero latency? Even with LAN ping times of <2ms I routinely get >200ms of latency with IP cameras, sometimes as much as 1000ms, and can't figure out a good solution.
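For the /dev/videoX part at least, v4l2loopback plus ffmpeg is the usual trick, though it does nothing for the latency (the stream URL below is a placeholder):

    # create /dev/video10 and feed it from the camera's RTSP stream
    sudo modprobe v4l2loopback video_nr=10
    ffmpeg -rtsp_transport tcp -fflags nobuffer -i rtsp://cam.local/stream \
        -f v4l2 -pix_fmt yuv420p /dev/video10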
There were a bunch of people experimenting with using Raspberry Pis for low-latency drone FPV video. Turns out for some use cases WiFi in its standard config is a big part of the "getting below a few hundred milliseconds latency" problem. (Note: trying for low latency on 1080p video is making life harder for yourself too. If you don't _need_ 2-megapixel frames, go smaller. 720 or 600 lines might be fine. Drone FPV often uses 525-line analog video.)
These days most drone people just blow money on the new-ish DJI digital video stuff (which works on non-DJI drones). It's pretty spectacular, but it's over a grand (in AUD) worth of gear so for now I'm sticking with my old analog video gear.
In the past, w/ Axis cameras, I have enabled two streams, the high quality, high res. h.264 with high latency, and a lower resolution mjpeg stream with much lower latency.
You need to tune the buffers on both the sender and the receiver, as well as choose a codec with low latency. It takes some work, but you should be able to get it below a second without too much trouble.
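On the receiver, for example, most players buffer aggressively by default; a sketch of stripping that down with ffplay (the camera URL is a placeholder):

    # minimal-buffering playback of an RTSP stream
    ffplay -fflags nobuffer -flags low_delay -rtsp_transport udp \
        rtsp://cam.local/stream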
This is great, but why don't people post something more complex than connecting two things together? No disrespect, but even a basic Lego set would have more than two parts.
Does anyone know if any of these projects capture the h.264 stream encoded inside the webcam?
I would be interested in setting up something like the Nest paid subscription but on a local server (meaning record 100% of the video and build a nice interface to navigate it; no "detection of movement" kind of thing involved except for tagging).
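For cameras that do on-board encoding (the older Logitech C920 reportedly exposes H.264 over UVC, though newer firmware dropped it), I gather the stream can be copied without re-encoding, something like this with ffmpeg (device path is a placeholder):

    # pull the camera's own H.264 over UVC, no transcoding
    ffmpeg -f v4l2 -input_format h264 -i /dev/video0 -c:v copy capture.mp4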
Typically cameras just output the H.264/H.265 stream(s) via rtsp, and optionally a video analytics metadata stream. The part that records video from one or more cameras can be separate and is called a network video recorder (NVR). This split allows you to store several cameras' stuff in one place that is better physically protected and in a better form factor for having a hard drive, access a bunch of cameras' data easily under one UI even if they're made by different manufacturers, etc.
I'm working on an open-source NVR: https://github.com/scottlamb/moonfire-nvr that is secure, will run on a Raspberry Pi 4, and has a good recording schema. I wouldn't describe the UI as nice yet, but I'd welcome help in making it so!
There are some other open source NVRs (I see someone mentioned Shinobi). Probably a couple reasonably-priced commercial software options (people like Blue Iris, but it's Windows-only). Some commercial NAS devices have NVR support (eg Synology). Dedicated NVRs from manufacturers like Dahua/Hikvision (but my experience is they're awful). YMMV.
As someone who owns a Wii, and uses pcapture... I read this as "pee-pee cam". Probably not a good name for a small camera that can be hidden and run off a battery to stream live video.