> User Management
> Account Roles: Super Admin, Admin

So unless I switch to the >=EUR6.00/month Plus plan I cannot add a non-admin user? So my grandma gets admin privileges? Is that not a blocker for a lot of families?

Edit: From https://www.photoprism.app/plus/kb/multi-user
> PhotoPrism® Plus includes advanced multi-user functionality and additional account roles. These roles are intended for situations where you want other people to have access to your library, such as giving family members access to your pictures without granting write permissions or exposing private content.
> It is recommended to set up additional instances if you have multiple users in a family, so that everyone can manage their own files independently. This way you can avoid problems with conflicting library settings, file permissions, and dealing with duplicates.
So you're actually supposed to run an instance per person, it seems. But then my grandma would still be an admin. I don't think I'd like that.
> So unless I switch to the >=EUR6.00/month Plus plan I cannot add a non admin user? So my grandma gets admin privileges? Is that not a blocker for a lot of families?
Maybe, but at least it's not per-user, per-month. I'm much more likely to set up something like this if I can add or remove family members as needed without thinking about the cost beyond the annual fee I pay to run my own instance.
The blocker for me is typically ease of use. They recommend PhotoSync for transferring files to the instance. I currently use that for my family, but sync to a Nextcloud install (which I desperately want to replace). The part where it fails for me is that my family members don't understand how to verify everything is running as intended. For example, I checked one of their phones earlier this year and they had 5k un-synced photos.
What this really needs to be useful is a native app that does sync from the phone in a way that makes it relatively foolproof.
Beyond the ease of use, every link in the chain of the workflow is a point of risk IMO. For example, what if PhotoSync rug pulls everyone (I have no reason to believe they would) and starts charging a subscription? The app that runs on your phone and does the syncing is the more valuable half of the workflow IMO.
After thinking about it some more I now think you're right, at least it's a flat fee that doesn't go up as you add more users. And it's not that high a cost. I'll give it a try.
And I'll pay close attention to the reliability of the sync. I've been using Syncthing for a while now and still don't trust it completely. Quite regularly I have to restart the mobile app for files to get synced, and I just can't figure out what's wrong. I don't want that to happen with photos. Maybe I'll monitor the date of the latest photo per user and alert if it's too old.
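A minimal sketch of that kind of staleness check, assuming one synced folder per family member (the paths, extensions, and threshold here are made up):

    import os
    import time

    SYNC_ROOT = "/srv/photos"    # hypothetical: one subfolder per family member
    MAX_AGE_DAYS = 7             # alert if someone's newest photo is older than this
    MEDIA_EXTS = {".jpg", ".jpeg", ".heic", ".png", ".mp4", ".mov"}

    def newest_media_age_days(folder):
        """Return the age in days of the most recently modified media file."""
        newest = 0.0
        for root, _, files in os.walk(folder):
            for name in files:
                if os.path.splitext(name)[1].lower() in MEDIA_EXTS:
                    newest = max(newest, os.path.getmtime(os.path.join(root, name)))
        return (time.time() - newest) / 86400 if newest else float("inf")

    for user in sorted(os.listdir(SYNC_ROOT)):
        age = newest_media_age_days(os.path.join(SYNC_ROOT, user))
        if age > MAX_AGE_DAYS:
            print(f"WARNING: newest photo for {user} is {age:.1f} days old - check their sync")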
Freemium pricing is always a tricky balance. Your free tier should provide a modicum of value to entice people into installing your software, but ideally it will also incentivize the users who are getting value from your efforts to pay an amount that is reasonable and comfortable for them.
If you're using it for more than personal use, like family use, then a heavier set of administrative options becomes necessary, and therefore pricing will be part of the equation.
Can it remove duplicates? That’s the holy grail along with storing my images. I’ve got so many damn photos, and I want to reduce the total amount I have but going through them is so daunting I’ll never do it without a computer-assisted organizer.
Unfortunately, if you've used Google Takeout or other systems that can both downsample your photos and videos and actually delete _or change_ metadata, deduplicating becomes a big wad of heuristics.
My first approach was to build a UID based on a series of metadata fields (like captured-at time, the shutter count of the camera body, and the geohash), but that breaks when metadata is missing or has been edited in some of the sibling variants.
Just finding which assets to compare against by time proved problematic in some edge cases. Storing in UTC (which is how I've built systems for the last thirty years!) made any file that didn't encode a time zone wrong by whatever the correct offset was--almost all videos, and many RAW file formats, don't carry timezones. The solution I came up with was to encode captured-at time _in local time_, rather than UTC. I also infer or extract the milliseconds of precision for the date stamp, so files with more or less precision will still overlap during the initial captured-at query.
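A rough sketch of that windowing idea in Python (not the parent's actual code; the timestamps and precisions are illustrative):

    from datetime import datetime, timedelta

    def captured_at_overlap(t1, precision1_s, t2, precision2_s):
        """Treat each captured-at stamp as a window [t, t + precision).
        Two assets are candidate duplicates if the windows intersect.
        Times are naive *local* times, so a video without a timezone
        and a RAW file with one still line up."""
        end1 = t1 + timedelta(seconds=precision1_s)
        end2 = t2 + timedelta(seconds=precision2_s)
        return t1 < end2 and t2 < end1

    # A phone JPEG with millisecond precision vs. a video stamped to the second:
    jpeg = datetime(2023, 7, 4, 14, 30, 12, 345000)
    video = datetime(2023, 7, 4, 14, 30, 12)
    print(captured_at_overlap(jpeg, 0.001, video, 1.0))  # True: worth comparing further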
My main point here is to expose how deep this rabbit hole goes—I'd suggest you think about these different scenarios, and how aggressive or conservative you want to be in duplicate aggregation, and what you want to do with inexact duplicates.
Most importantly: have full offline backups before you use any deduping or asset management system. When (not if) the tool you're using behaves differently from how you expected, you won't have lost any data.
And if you just want to go by the pixel data, look into "perceptual hashing". https://github.com/rivo/duplo works quite well for me, even when dealing with watermarks or slight colour correction / sharpening. You could even go further and improve your success rate with Neural Hash or something similar.
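duplo is a Go library; the same idea in Python, using the imagehash package instead (my substitution, not something the parent uses), looks roughly like this:

    # pip install pillow imagehash
    from PIL import Image
    import imagehash

    h1 = imagehash.phash(Image.open("original.jpg"))
    h2 = imagehash.phash(Image.open("sharpened_or_watermarked.jpg"))

    # Hamming distance between the 64-bit perceptual hashes; a small
    # distance means "probably the same picture" despite minor edits.
    print(h1 - h2)
    print(h1 - h2 <= 5)  # simple near-duplicate threshold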
is there an option to just calculate image hash (but on image data, not the full file of image data + metadata) without any transforms? So that if it matches you can be 100% certain it's the same image
Unfortunately, (almost all!) image hashing algorithms don't detect color differences--they map images to greyscale first. This may be fine for many situations, but it will return the same result for a sepia tint, a full-color original with incorrect white balance, and the final result you made after mucking with channels for a couple of minutes.
I also found that there really isn't one "best" image hash algorithm. Using _several different_ image hash algos turns out to be only fractionally more expensive during both compute and query times, and substantially improves both precision and recall. I'm using a mean hash, gradient diff, and a DCT, all rendered from all three CIELAB-based layers, so they're sensitive to both brightness and color differences.
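A sketch of that multi-hash, multi-channel idea (assuming Pillow's ImageCms for the CIELAB conversion and the imagehash package for the mean/gradient/DCT hashes; the parent's actual implementation will differ):

    # pip install pillow imagehash
    from PIL import Image, ImageCms
    import imagehash

    def lab_hashes(path):
        """Several perceptual hashes per CIELAB channel, so the result is
        sensitive to colour differences as well as brightness."""
        img = Image.open(path).convert("RGB")
        srgb = ImageCms.createProfile("sRGB")
        lab = ImageCms.createProfile("LAB")
        to_lab = ImageCms.buildTransformFromOpenProfiles(srgb, lab, "RGB", "LAB")
        L, a, b = ImageCms.applyTransform(img, to_lab).split()
        hashes = {}
        for name, channel in (("L", L), ("a", a), ("b", b)):
            hashes["mean_" + name] = imagehash.average_hash(channel)
            hashes["gradient_" + name] = imagehash.dhash(channel)
            hashes["dct_" + name] = imagehash.phash(channel)
        return hashes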
If software is deduplicating images by SHA including metadata, it's missing a lot about photography workflows :) So, huge props (and I mean that - so many tools get that wrong!) on your approach of trying to identify asset groups and trying to ID members via a number of heuristics.
If you want to go deeper, I suggest grouping everything by image content - that means, at the very least, comparing images via a resolution-independent hash (e.g. average down to an 8x8 greyscale picture for the simplest approach). (undouble does that nicely, and has a number of different approaches: https://erdogant.github.io/undouble/pages/html/index.html)
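For reference, that "simplest approach" (an 8x8 greyscale average hash) fits in a few lines of Python; a minimal sketch:

    from PIL import Image

    def average_hash_8x8(path):
        """Shrink to 8x8 greyscale, then set one bit per pixel depending on
        whether it is brighter than the mean. Resolution-independent."""
        img = Image.open(path).convert("L").resize((8, 8), Image.LANCZOS)
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        bits = 0
        for p in pixels:
            bits = (bits << 1) | (1 if p > mean else 0)
        return bits  # 64-bit int; compare two hashes via bin(h1 ^ h2).count("1")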
Certainly, but it depends on your goal. Models like CLIP (https://openai.com/research/clip) are designed to help describe images with an embedding, but are trained to give similar embeddings for similar image _content_, not for similar _pixels_.
Simple image hashes based on small thumbnails can do similar image detection quite well (see sibling comments for links to several different approaches).
I used libraw to read the actual raw data from my images, ignoring possible metadata that can get changed/updated by Capture One for example. The raw data is just fed into a hashing function, to get an exact content hash. Does not work if your image is down-sampled of course, but that was actually my goal - I want to know if the raw content has changed/bit flipped, but don't care if the metadata is altered or missing.
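The parent calls libraw directly from their own code; a rough Python equivalent via the rawpy binding (my substitution) would be:

    # pip install rawpy  (Python bindings for libraw)
    import hashlib
    import rawpy

    def raw_content_hash(path):
        """Hash only the raw sensor data, ignoring metadata that editors
        like Capture One may rewrite."""
        with rawpy.imread(path) as raw:
            return hashlib.sha256(raw.raw_image.tobytes()).hexdigest()

    print(raw_content_hash("IMG_0001.CR2"))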
Ah! In this case, you might be happy to discover that ExifTool just added an "image content" hash--it's the SHA (or MD5 or whatever) hash of the _actual image bytes_, after ignoring the metadata header (and possible footer) payloads.
So if you, say, change the rotation of your RAW image file (which normally will just add or change the EXIF `Orientation` tag), the image content hash will stay the same.
His tool is in Rust, so he can call the libraw function directly. ExifTool is a Perl tool, so it'd be an external process that he'd need to coordinate with.
Deduplicating by SHA, or by exactly identical metadata, sounds so rudimentary in 2023... 10 years ago VisiPics was already doing an excellent job to help find similar or identical images in a collection, based on visual similarity!
> Unfortunately, if you've used Google Takeout or other systems that can both downsample your photos and videos, as well as actually deleting _or changing_ metadata, deduplicating becomes a big wad of heuristics.
I thought takeout exported the original files? (alongside metadata entered into the application)
I used to use DupeGuru which has some photo-specific dupe detection where you can fuzzy match image dupes based on content: https://dupeguru.voltaicideas.net/
But I switched over to czkawka, which has a better interface for comparing files, and seems to be a bit faster: https://github.com/qarmin/czkawka
Unfortunately, neither of these are integrated into Photoprism, so you still have to do some file management outside the database before importing.
I also haven't used Photoprism extensively yet (I think it's running on one of my boxes, but I haven't gotten around to setting it up), but I did find that it wasn't really built for file-based libraries. It's a little more heavyweight, but my research shows that Nextcloud Memories might be a better choice for me (it's not the first-party Nextcloud photos app, but another one put together by the community): https://apps.nextcloud.com/apps/memories
Full agreement, Immich has a similar problem. I don't know at what point basic systemd services stopped being enough but docker is usually a non starter for me.
I dunno, I enjoy not having to install 500 libraries on my system just to test an app. Also upgrading those libraries without butchering my system is also nice. Not to mention rebuilding a system is really fast. Too many pros outweigh the cons
90% of my services work fine without Docker: Mastodon, Lemmy, Peertube, Caddy, Forgejo, Jellyfin, and Plex. Immich also has a way of installing without Docker from source, but it isn't documented. I don't mean to single them out; the app is great otherwise.
A developer must have had a working config at some point to create the Dockerfile. Providing literally only the Dockerfile is usually just a sign of throwing your hands up and saying "It's too hard!" You should be able to package for at least one platform that isn't Docker. That's just app development, or at least it was until recently.
I disagree, as I have spent way too many hours fighting ancient packages in Ubuntu repos at work. Image processing is very difficult when limited to the versions available in Ubuntu or Fedora. I do wonder how Mastodon or Plex does it. Plex is a paid product, though.
Fair enough. Maybe image processing is just something I have not dealt with much, and it is no coincidence that the two image hosting services are the ones with Docker-oriented processes.
You can always build and tag your own docker image using the docker files included from source. Or simply follow the docker files as install instructions:
I'm not sure if you perused the docker files for this repo, but imho there is nothing simple about them
Edit:
I was curious so I dug into them a bit and found that the dockerfile references a develop docker image. There is a second docker file to see how that base image is built. In the steps to build that base image, we grab some .zip files from the web for the AI models. We install go, nodejs, mariadb, etc for quite a few deps, and then there is also a substantial list of packages installed. One step also does:
apt-get update && apt-get -qq dist-upgrade
Which seems a bit iffy to me. Each step calls a script which in turn has its own steps. Overall, I'd say the unfurled install script would be quite long and difficult to grok. Also, I'm not saying any of this is "bad," but it is complex.
They're both kinda dumb though. Updating will create a new layer, but the old binaries will still be a part of the image as part of the history.
The only correct way is to either rebuild the base image from scratch or just fetch a new base image.
My suggestion would be the latter, just run docker pull again for the baseimage and use that, without running update.
Docker is the one thing that works on MANY flavors of Linux.
If I want to provide a tool I want to spend my time on building the tool, not building an rpm, a snap, a deb, ...
The Docker build process is significantly easier. For example, I can just pull in NodeJS 20. I can't do that on Ubuntu. It's not available on packages.ubuntu.com.
Building a deb/snap/rpm is a whole other language to understand how dependencies are set up.
And then I need to test those. I've never even run CentOS.
I'm assuming openjdk is versioned correctly and that the name of the opencv package is opencv; replace openjdk with your JDK package name if it differs.
You would put this line in your rpm .spec file. Do you really think the above line is hard? Maybe the difficulty you have is in never having touched rpm. Start here: http://ftp.rpm.org/max-rpm/index.html
Yes, that is already way too much extra work. And what about when those versions are not yet available to depend on? libheif, libaom, and libraw need their latest versions and have to be built together.
That is an implementation detail, not a problem with the rpm spec. When Comcast builds RPMs, the entire chain of required software is built with it. I should add: it is trivial to write integration tests to catch exactly what you describe. If the software is not built internally, then the infra team ensures the required repos exist and dependencies match up to official repo RPMs.
None of that has to do with the difficulty of rpm spec, but entirely with organizational planning. You do plan while building software... right?
Also, GitHub does the exact same thing but with deb. It works in the real world. Quite well, too!
I generally don't either, but I wonder: are you more comfortable with running a docker image without internet access? You can firewall your host so the container can't access it and assign an internal network to the container.
Genuine question: what do you consider "trusted" code/apps? What difference is there between compiling from source and using the prebuilt official Docker image?
Yup! Worked great for the few months I used it. I think it's kinda funny how much simpler the Nix package is when compared to upstream's dockerfiles lol
Yuck. Every time my system has some random binary that doesn’t seem to resolve to any expected location it’s because it’s installed with flatpak, snap or appimage (why 3?!).
What percentage of people install this on their own computers?
Docker is much less manageable locally than, say, systemd or supervisor.
The only thing (which is something) that docker has going for it is that it's cross platform. But the same argument can be made for writing apps using electron. If you're going to do that, fine, but acknowledge the compromise. It's not better.
If portability is a project goal, maybe pick a language that is actually portable (which python is not). This is almost exactly the argument for Electron: I want a portable app written in js + html, therefore I need to include a js runtime (i.e. chromium + node) with it.
Some features are only in the paid version which is fair enough, but when I tried it a few months ago there was a small but permanently visible message on the app reminding me this was the free version. That was annoying.
If I am going to host something myself and not pay i'd like to do so without a "this is the free version" reminder all the time. At that point I'll just keep paying for my Apple One subscription.
They have a contributor licence agreement that allows them to relicense incoming contributions, so they can release the paid version under a proprietary licence, if they so wish.
> Wouldn't that source only need to be available to those who purchase the paid version?
Yes, but I am trying to see if there is a better solution to the Red Hat freeloader problem.
To spell it out,
What is to stop Oracle (doesn't make sense but couldn't come up with a more evil name) from purchasing a license and releasing IronPrism or whatever by doing s/PhotoPrism/IronPrism?
I deployed this recently and added a bunch of pictures to it. PhotoPrism prominently features an AI-generated (?) 'description' on each photo, but for 98% of my photos, it was unable to come up with any description. The install procedure is needlessly complicated, there's no good reason for an app like this to require docker.
I disagree with the contention that docker makes installs more simple. I would argue that it only makes installs more simple if you are already setup with Docker, and only when things go 100% correctly.
When I see a software application that recommends Docker for deployment, I always assume that I'm going to need to do extra work to make it behave correctly. Whether that is figuring out how to make it auto-start, or how to forward network ports, or how to share a data partition with the host OS.
Non-docker installs are simpler. At least, for my skill set.
Can't you just copy what the Dockerfile is doing bud? Plenty of home server enthusiasts love docker for the simplicity it brings to handling a bunch of software. Not trying to knock you, just wondering where the complexity is coming from for you.
Sure I can. But it's not always that simple. Let's look at the repository for the software discussed in this thread [1]
I see one dockerfile and 7 docker compose files (.yml)
The dockerfile does not apparently do anything useful. I'd be amazed if running that dockerfile by itself produced anything useful
Now, I don't know very much about docker compose, but I learned a bit of it in order to get this software running on my server. If I worked at it, I could almost certainly get a working install of PhotoPrism without using Docker, but it would be annoying work, and I wouldn't have any certainty. I wouldn't know that it was correct, and any time something didn't work the way I expect, I would worry that I screwed something up during the installation
Not to mention the added operational complexity involved in managing a dockerized application compared to managing e.g. an equivalent webapp deployed without containerization (systemd service file, configuration file, etc)
Building From Source
You can build and install PhotoPrism from the publicly available source code:
git clone https://github.com/photoprism/photoprism.git
cd photoprism
make all install DESTDIR=/opt/photoprism
Missing build dependencies must be installed manually as shown in our human-readable and versioned Dockerfile.
I haven't been able to find any dockerfile that lists dependencies. My guess is that the documentation is referring to some prior architecture or something similar
And the word "Dockerfile" is a link to a nonexistent page... they definitely skipped their grep + sed update homework when moving things around.
Noticing the broken link points to photoprism/docker/develop/Dockerfile, I supposed they had moved it, and indeed, just by going one parent up, the directory photoprism/docker/develop/ contains subdirectories for lots of base systems, each one containing a Dockerfile that lists the dependencies needed for it.
Oh yea but if it doesn’t recognize a face then you can’t do anything. I have plenty of photos where the face is right there, but photoprism just didn’t recognize that there’s a face, and I can’t manually point at it and say “add a face, name it xxxx”
It is fully open source; the open source code also contains the logic to verify a serial key that you will be given if you purchase the license, and if verification succeeds the code will enable the extra features.
As such you could fork it, tweak the code around the license verification to always return TRUE or whatever, and you would be running a "pro version" of the software.
The developer simply trusts that people who like the software will purchase a license, or the fact that the majority of people out there are not programmers and would not be able to rebuild the software for themselves. Besides, the product is also offered as a SaaS.
For those looking for an open source way to sync photos, Syncthing has an Android app that works well. While it is 'always running' on my Android phone, I love how as soon as I arrive home it connects to wifi and moves any new photos to my home Linux box (which is also running Syncthing).
Immich is very much alpha, that's why there are warning banners all over their docs. But at least it's in development - the single PhotoPrism dev rarely accepts PRs, so it hasn't had significant new features in ages
That demo site looks very promising; the feature set is already quite exhaustive for such an early development stage. It comes closest to a full-blown self-hosted Google Photos replacement.
It would be cool if it also had dynamic shared albums based on facial recognition, like Google Photos has.
I looked at a bunch of these 2 years ago and ended up using PhotoView for a private gallery. It had the right mix of simplicity and features, and I was actually able to get it running.
This seems to be light on the details for the p2p / decentralized part. Anyone have more details? Does this use DHT or a blockchain or how are they doing that?
I'm surprised nobody has pointed out how terrible its selling feature - its AI - is. I tested it out and it was practically useless at classifying my images. Simple things like an obvious image of a cat it misses tagging as a cat so it isn't searchable.
Did I conceivably mess up the setup or is this other's experience as well?
Google photos AI is so far ahead of Photoprism it isn't even a contest unfortunately. Which sucks because I am more than willing to pay for a viable alternative (still looking...).
I have recently downloaded acdsee's photo thing to test this, but have not gotten around to running it and doing the things - but maybe it's partially there?
acdsee.com/en/products/photo-studio-ultimate/
It looks interesting; I might be in their target group.
But after ten minutes of browsing around, I still have some open questions regarding privacy. What would I have to do to have an absolutely, unequivocally local-only install? Can I even do that? Which features would I need to disable? Especially anything related to AI and classification raises red flags in that regard.
While it is apparent that they've given a lot of thought to privacy and while their privacy policy is definitely one of the better ones I have seen, it still conflates things such as website access with usage of the tool itself.
It would be nice to have one clear, guiding document that outlines how private a private install really is.
AI recognition is local. You can totally disable external access and it will run fine (obviously assuming your photo storage is local).
Their recognition model is loose. It works and it works well enough for what this is to be useful but it's just a quick classifier. It will absolutely have misses.
You can optionally add in a Coral dev board to save yourself some CPU load, but I've never found it necessary.
Is the non-accelerated model also a quantised version? The EdgeTPU is very efficient, but you can get a significant performance drop unless you take care when converting. Modern CPUs can do classification fast enough that you could do background processing, and unless you really have tens of thousands of images, you'd be done in an hour or two.
Yep. I like the design. I like the little globe that shows where your pictures come from. It is all neat. I actually would pay money for it, because right now I just watch pics/vids from trips manually. I would not want to install another data hoover (especially of something as intimate as my family pictures) unless I could be relatively certain data stays where it is.
Also check out PiGallery for a more lightweight solution: https://github.com/bpatrik/pigallery2 (with fewer features of course, but it may work if you only want an online gallery).
Can you share how you are hosting it, photo library file types or size, and what the performance is like? I'm always skeptical of stated requirements and what other people are prepared to put up with.
I've got an older Synology NAS that currently stores my 250GB photo library with a lot of RAWs. While it can run docker I'm not going to attempt to run it there. I'm wondering if a VM on my 10700k is going to be sufficient for great performance, or if handing over the complete machine is going to be necessary. I'm happy adding more NVMe 3.0 drives for storage.
I'm also curious how version upgrades have been for you. I want to get out of the ops business but I feel like I'm going to be dragged in.
My photo library is incredibly simple compared to yours. Maybe 15 GB of a mixture of tiff, raw, and some PNG.
I would say my requirements are nowhere near yours and that you would really need to test it yourself for your usecase. Some people on the photoprism forums claim it can handle large libraries like yours but I would always recommend testing yourself.
I host it via docker on a pretty tiny VPS with 3 vCPU and 4 GB RAM and it's fine.
It probably depends on what you mean by "de-google", but if you simply do not want to host your data at Google or depend on Google services, it's quite easy.
As far as I can tell, the only place you really need to be logged in to a Google account to get a "normal" Android experience is the Play Store, which makes sense.
I use Google Maps for example (Osmand when I can, but Google Maps is the only free app that has good traffic information) while not logged in, but my photos are hosted by Photoprism, my emails are not at Gmail, my backups are on my home server, etc.
I find myself largely independent from Google and if that's your goal, as opposed to trying to hide yourself from Google (which would be better, but I find it unrealistic for my needs), it's easy to achieve on an Android phone and certainly easier than independence from Apple on an iPhone.
If you want a Google-free experience then the Play Store is not that important, as (1) it can hardly be said to be degoogled and (2) most apps there want GCM for a start, so you connect to Google even if using microG. F-Droid should be your best friend.
To add to that second point, stock Android is still tied to Google. Last I checked, it would phone home for the internet connectivity check and a few other things, though I could be wrong.
But if you use GrapheneOS, they replace the Google services with their own alternatives and sometimes let you disable the feature entirely. Also, on any rooted Android you can use AFWall+ to firewall any Google connection or app. This way, even system services can't make network requests.
I tried for a while. You can use a ROM based on AOSP with microg services in place of the google binaries. It was about 80-90% of what the normal android experience is. I finally got frustrated with it and reinstalled with the Google binaries, but it is doable.
You definitely can. I use GrapheneOS and F-Droid as the app store. I guess the phone itself is from Google but other than that nothing. I even get OTA updates asap. I use NewPipe instead of YouTube, my own Nextcloud instead of Google Drive etc.
I gave this a shot a while ago and it looked promising. I particularly like the ability to identify pictures, something that Google Photos seems to be slipping on. (*)
The deal breaker for me was that there was no way to share the entire inventory of my photos without providing admin access. Does anyone know if that feature has been provided?
(*) I used to be able to find some pictures using simple keywords like "dog" but Google finds far fewer pictures with those keywords any more. OTOH, it is amusing to see the interpretation of some of my photos in Photoprism, but I suspect that will get better with time.
Am I the only one who is amazed by this technology?
> Please don't upload photos containing offensive content. Uploads that may contain such images will be rejected automatically.
> Non-photographic and low-quality images require a review before they appear in search results.
How does it know what is "offensive"?
Is this configurable?
I don't want to upload photos of people (for obvious reasons), but other than that, does it even know the contents of a photo well enough to tell if it is offensive?
Is a photo of a shelf at a grocery store selling beer offensive?
Where is your quoted text from? I couldn't find it on linked github.
Edit: Okay, found it. I think it's likely that those warnings are only for uploading on their demo server linked on github and not on the locally hosted product.
It is a setting you can choose to enable when you install it. It judges “NSFW” images locally and automatically tags them as private; you can then choose to mark them "public" (as in they will appear in searches, albums, and the like). Or you can just choose not to enable that when you install it and it won't run that analysis. I have it turned on just in case something gets mixed in with my regular photos. I'd say 80% are false positives: completely random pictures, bikini pictures, or my gf lying on the couch in short shorts. But it does catch the occasional “private” image, so I like having it on. I also mark some photos private for various reasons, so I can pull up and show people my vacation photo album without worrying.
The review section is not for NSFW things. It is for low resolution or low information images. Again, it runs locally. As far as I know, it is to keep your library from being clogged up with crappy thumbnails or screenshots or other junk. If you don’t care, you can just bulk approve them.
It also does facial recognition locally. You aren’t giving up any privacy, which is why I use this after resisting Google photos and just doing the folders method for years.
1. This is a feature of the base software, largely inherent to the core feature (see below)
2. It is configurable and I believe mostly off by default, depending on the install method
3. It is lit up in their demo instance
4. It is not a bespoke content filter. This is an `AI POWERED` app that classifies and labels photos so they can be indexed for text search. Everything is processed through a NASNet and labeled. The upload filter just does some pretty rudimentary heuristics on the labels and decides whether it will allow or quarantine the upload. To be clear, it is all server-side: the image is uploaded, it's just quarantined.
Haha, I saw Immich on top of HN a few hours ago and wanted to tell everybody I've been using PhotoPrism + Syncthing for quite a long time now, and now I see that PhotoPrism made it to the top as well. Good!
I've been using Photoprism for perhaps 18 months, primarily as a basic way to categorize and browse my photos. Not a power user by any means but I found it easy to use and upgrade over time. I had a couple small issues like image deletion and easy sync to s3, which may have been addressed in recent builds. I actually need to get back into it to upload my recent summer vacation pics. Overall I was a happy user and the tool met my needs for a photo catalog that I control and run locally.
I used PhotoPrism some time ago when it was slightly less featured, but wasn't satisfied with it. The dealbreaker for me is auto mobile backup. PhotoPrism relies on a third party app to "sync" photos, which isn't the same as backing them up. With that said, PhotoPrism is all the rage in the self-hosting community and I can definitely see why.
Conversely, Synology Photos (which I use instead now) has a fantastic mobile app. However, if you really want reliable and granular object/face recognition, the Syno app is a little bare bones. It does some face recog but that's really it. However, its backup feature is reliable and I don't have to worry about if it's working in the background or not. I tried Nextcloud's photo plugin for a while too, but its mobile app had issues and I just couldn't rely on it.
I'm happy for now with Syno Photos but it would be nice to have my photo app in my container environment with everything else I run and just use the NAS for media storage like I intended to.
The Synology Photos UI feels much more polished and it includes a first-party mobile app. PhotoPrism only has community mobile apps, which weren't well maintained. And last I checked, PhotoPrism did not have multi-user support. Synology Photos feels much more like something I could get my partner and kids to adopt.
The downside is that Synology Photos' database schema and API are not officially documented, but you can find people who have documented them, and since it's on a machine you own you have unlimited access to them. So it's still a big step up from Google Photos and Apple Photos, which are chock-full of restrictions - e.g. Apple Photos doesn't even have a web API.
It's not a proper comparison, because I have ancient ds115, but Syno's app is really barebones. I tried it on Xpenology too, just to be sure, but overall it's... meh. PhotoPrism was way better compared to it.
Getting it running is more complex than the claim "All you need is a Web browser and Docker" suggests. That makes it sound like average users use Docker every day.
Somewhat related: anyone knows a service/tool which creates short movies like an iphone does automatically?
I have a NAS to which I automatically upload all my phone and DSLR photos/videos. I'd love to see automatic montages every now and then, with some animations/music etc., without having to make them myself...
I've tried a couple of self-hosted solutions to accompany my use of Google Photos in case Google ever decides to pull a stunt with Photos.
Nextcloud was basic, but serviceable. I liked that it handled docs, contact sync, etc. The sync was clumsy and I found updates frequently broke the system.
I just tried PhotoPrism with my ~75GB photo directory. The classification is... decent, and it all runs locally. It took around a full 24 hours to index and classify my photos, which it did reasonably well. Google Photos' classification is miles better to be sure, but PhotoPrism was easy to set up and works pretty well.
I no longer expose any of my services outside my local network (besides Wireguard) so I'm hoping to find a Photosync system that will only upload when on my home wifi and charging. Any suggestions there?
Nextcloud is now a pretty full-featured Google Photos replacement as well, with the Memories and Recognize apps. It will create face photo albums, tag images with objects, videos with activities, and audio with genre, though not as well as Google. I've run it for maybe a year now and no updates have broken anything.
I don't use docs/contacts or even the official photo sync though, I just sync files through Foldersync on Android. That can easily be set to only upload on certain wifi.
I tried this about a year ago and it was alright. I wanted to like Photonix, as it is written in Python and I expected it would be easier to hack, but I couldn't lie to myself. PhotoPrism is very clearly and very obviously the most mature of the available projects for this.
I use this but would love something that’s better at tagging content and face recognition. Photoprism misses the mark often, and it’s also not very easy to fix mistakes.
Edit: for example, just yesterday I uploaded ~30 photos of my girlfriend. It recognized her in one photo and didn’t even identify her face as a face in the rest.
I also don’t understand what’s doing the processing for the facial recognition. Is my cpu doing it or is an external service doing it?
Because the zip file only makes the photos harder to work with and probably takes more disk space than if they were unzipped (since you fundamentally can’t compress an already compressed file).
I run a little Photoprism instance on my NAS as a self-hosted complement to Google Photos.
It's a... passable alternative to Google Photos. I downloaded it before they started advertising AI and decentralized features - I'm kind of surprised they took this marketing direction.
I ran an instance of this for about a year. It is super cool and pretty easy to manage. It was more than I needed - I was using it as just a photo backup and then a way to share photo collections with people - and pretty expensive to run on digital ocean.
107k photos here, and it's not great. I just clicked a month in the calendar and getting just the json back took 1.8 sec. Last of 12 images came in 2 sec later. Postgres, SSD.
Plus: all thumbs of non-square images are cropped, and it generates a whole set of thumbnail files per source image.
The UI doesn't change the url to the photo you're looking at, so you can't share urls.
Images aren't links, so you can't do browser stuff like open one in a new tab. The back button doesn't work right at all.
The choice of metadata to display under each image in the listing is downright silly ('place', city, year, DoW, month, year again, time, zone, camera model, resolution, file size, city again, state, country).
The information scheme features 'albums', 'favorites', 'moments', 'labels', and 'folders', as if they were being forced to match a bunch of legacy systems. I want a powerful tag/group mechanism that I can use for many possible workflows.
The list under 'search' says 'Review 40193', clearly telling me I have that many pics to review. Clicking on it says:
No pictures found
Try again using other filters or keywords. In case pictures you expect are missing, please rescan your library and wait until indexing has been completed. Non-photographic and low-quality images require a review before they appear in search results
(And I had to disable some user-select:none styles to copy paste that)
If public open is the default (I'm going to give benefit of the doubt and assume it is not) then that is very bad design and would make me question if I trust the devs much at all.
But yes, never set media hosting to open upload unless you want to become an image host for content people can't host elsewhere (or just for images used in spam campaigns).