Hacker News | IvanK_net's comments

I am the creator of a photo editor www.photopea.com, used by around 1 million people every day (of which 300,000 use Android, 150,000 use iOS). After reading this article, I am so happy that I never spent a single hour trying to publish my tool at the "mobile stores" :D


Amazing tool, use it every week! Buuuut reading the main article, it just sounds like the author didn't want to take responsibility for their games. In Germany, it's actually required by law to provide your address, email and so on if you merely run a website. And targeting the latest Android releases seems very obvious to me. I don't see the issue this person has.


There are many obvious reasons. I'm in the US, and the last thing I want is to give the world my phone number for MORE stupid spam. My address is absolutely out of the question, not in these times. It sounds alarmist until you get physical death threats from people who may or may not even have anything to do with your app.

But for reference, this author is in the Netherlands, and things that seem trivial for US businesses are apparently more involved over there. These small 30-euro costs aren't worth it to update some 10-year-old app you'd only touch for a trivial migration.


This is only for commercial websites.


Sure, but it's considered a commercial website pretty quickly. Recommending something is enough to possibly make it commercial.


Can't thank you enough for Photopea. Very good (and impressive) job that you've done with it! It's very useful for a frontend developer who wants to quickly edit an image!


Just a quick thank you for making and running photopea. Back when ChromeOS didn't run anything but web apps, Photopea was a godsend. I haven't used or thought about it since but I'll check it out again if the need ever arises.


Like the others, thank you! We used it in our introductory collage class in college (per the professor's recommendation).


you're one of my heroes :)


Lossy compression is possible in PNG to a certain degree. This is the "same PNG", but 94 kB instead of 1015 kB:

1015 kB: https://sidbala.com/content/images/2016/11/outputFrame.png

94 kB: https://www.photopea.com/keynote.png


(What's happening here is a reduction of the color count: https://en.wikipedia.org/wiki/Color_quantization)


That's not PNG being lossy; the image was preprocessed to lose information by reducing the color count before being compressed with PNG.


That is what "lossy compression" means. Instead of encoding the original image, you look for a similar image that can be losslessly encoded into a smaller file.

In PNG, you look for an image with 100–200 colors (so that colors repeat more often) and losslessly encode it with DEFLATE.

In JPG, you look for an image with a reduced description through DCT, and losslessly encode it with Huffman trees.
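The PNG half of this idea can be demonstrated with plain zlib, the same DEFLATE that PNG uses internally: quantizing pixel values makes bytes repeat, and the lossless stage then shrinks dramatically. A stdlib-only sketch (the synthetic gradient stands in for a real image, and the 16-level quantization is just an illustrative choice):

```python
import zlib

# Synthetic grayscale "image": a horizontal gradient, one byte per pixel.
width, height = 256, 64
original = bytes(range(256)) * height          # each row is 0, 1, ..., 255

# The "lossy" step: quantize every pixel to one of 16 levels,
# so byte values repeat far more often (like reducing the color count).
quantized = bytes((b // 16) * 16 for b in original)

# DEFLATE is the lossless stage that PNG applies afterwards.
plain = zlib.compress(original, 9)
lossy = zlib.compress(quantized, 9)
print(len(plain), len(lossy))   # the quantized data compresses much smaller
```

The loss happens entirely in the quantization step; DEFLATE itself stays lossless, which is exactly the division of labor being discussed.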


No, that's image preprocessing to lose information before compression. JPG would be smaller if the color count were reduced as well.

If color quantization using a specific algorithm were part of PNG, you could say it's lossy compression. It's not; PNG is not lossy compression by any definition. Your specific image-processing pipeline might be lossy, depending on the tools you ran the image through or the equipment used to capture it. PNG is not.


> Lossy compression is possible in PNG to a certain degree. This is the "same PNG", but 94 kB instead of 1015 kB:

> 1015 kB: https://sidbala.com/content/images/2016/11/outputFrame.png

> 94 kB: https://www.photopea.com/keynote.png


PNG is not lossy; your example uses a preprocessor to quantize colors before compressing with PNG.

PNG loses no information from the image being fed to it; the loss happened earlier, when creating a simpler image.


I know that the "lossiness" is not included in PNG encoders, as it is in JPG / WEBP encoders. But the idea is the same: if you are ready to accept some loss (by reducing the number of colors, in this case), you can get a much smaller PNG file.


You can get a smaller uncompressed bitmap by reducing the color count too, via fewer bits per pixel; that does not mean BMP is a lossy compression scheme. Color quantization will reduce the final size of the image in nearly all image formats; JPG/WEBP would benefit as well.

You could have an artist or an AI render the picture as a cartoon, and PNG will compress it much more than the original; that does not mean PNG is a lossy compression scheme.

You could take a photo with a low-resolution camera, and it will be smaller than a higher-resolution one after PNG compression; again, nothing to do with PNG.

It is no surprise simpler images with less information compress better with lossless formats than more complex ones.

Your example implies that the PNG format itself has a lossy mode, when it's really just the way the image is preprocessed, and there is no limit on the techniques that can lose information to make the image smaller, independent of the codec.


You can have PNG and JPG versions of the original image without any loss, and the files would be about the same size.

You can have PNG and JPG versions of the original image with the same loss (the same average error per pixel), and the files would be about the same size.

I know there is no lossy compression described in the PNG standard. But there exist programs which produce tiny PNG files with a loss. So comparing an MP4 encoder with built-in loss to a PNG encoder without any built-in loss is not fair. They should have made the MP4 encoder encode the video in a lossless way, just like they did with PNG.


It reminds me of my WebGL first-person shooter https://dinohunt2.ivank.net that I made 10 years ago :)


BTW, for the last 15 years, all web browsers on Windows have done WebGL on top of DirectX (using the ANGLE library https://github.com/google/angle).


I think they should simply use four blocks of BC1 (DXT1) texture: https://en.wikipedia.org/wiki/S3_Texture_Compression

It allows storing a full 8x8-pixel image in 32 bytes (4 bits per RGB pixel).
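For anyone curious how those 32 bytes break down: each BC1 block is 8 bytes (two RGB565 endpoint colors plus sixteen 2-bit palette indices), so four 4x4 blocks cover an 8x8 thumbnail. A minimal decoder sketch in Python (this is the standard BC1 layout, not anyone's production code):

```python
import struct

def rgb565_to_rgb888(c):
    """Expand a 16-bit RGB565 color to an (r, g, b) byte triple."""
    r, g, b = (c >> 11) & 0x1F, (c >> 5) & 0x3F, c & 0x1F
    return ((r * 255) // 31, (g * 255) // 63, (b * 255) // 31)

def decode_bc1_block(block8):
    """Decode one 8-byte BC1 block into a 4x4 grid of RGB tuples."""
    c0, c1, bits = struct.unpack("<HHI", block8)   # 2 endpoints + 32 index bits
    p0, p1 = rgb565_to_rgb888(c0), rgb565_to_rgb888(c1)
    if c0 > c1:   # 4-color mode: two interpolated intermediate colors
        palette = [p0, p1,
                   tuple((2 * a + b) // 3 for a, b in zip(p0, p1)),
                   tuple((a + 2 * b) // 3 for a, b in zip(p0, p1))]
    else:         # 3-color mode + "transparent black"
        palette = [p0, p1,
                   tuple((a + b) // 2 for a, b in zip(p0, p1)),
                   (0, 0, 0)]
    # 2 bits per pixel, row-major, least-significant bits first
    return [[palette[(bits >> (2 * (4 * y + x))) & 3] for x in range(4)]
            for y in range(4)]
```

Four such blocks really are 4 × 8 = 32 bytes for 64 pixels, i.e. the 4 bits per pixel mentioned above.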


> 4 bits per RGB pixel

That sounds inferior. From the article:

> ThumbHash: ThumbHash encodes a higher-resolution luminance channel, a lower-resolution color channel, and an optional alpha channel.

You want more bits in luminance. And you probably also don't want sRGB.


Nice idea. I tried it, it works really well: https://imgur.com/a/p3l6ABh

A software decoder would be tiny and you can use an existing good BC1 encoder.


Wow, nice! I wrote a BC1 encoder in JS, it is quite simple :) https://github.com/photopea/UTEX.js/blob/master/UTEX.js#L198


If it's really that simple, looking forward to your github repo that gives folks the JS and Rust libraries to do that =)


The idea of Incognito mode is that the website should be unable to detect that you are using it.

There is a bug in Chrome, which I reported, but they told me they will not fix it: https://bugs.chromium.org/p/chromium/issues/detail?id=120485...


You are not detecting incognito mode but another attribute that correlates with incognito mode.


If we ZIP both SVG and TVG, the difference is not that large anymore.

For most people, wide support is more important than a 20% smaller file size. That is why it is hard for WEBP to replace JPG.


I don't think the size is a key advantage here, even if it were true. 96 kilobytes simply isn't very big to start with, considering most of the content that gets sent around.

Like you said, support is what matters. SVG is notoriously difficult to implement and it took forever to be supported everywhere, which also contributed to the persistence of Flash. TVG is supposed to be easy to implement, which seems to me to be the advantage.

JPEG, by contrast, is a pretty simple format with only a few minor quirks.


If someone already implemented an XML parser for you, SVG is easier to implement than TVG (if you are trying to extract the same type of content out of SVG as out of TVG).


>if you are trying to extract the same type of content out of SVG as out of TVG

But then you haven't implemented SVG. You can't say "we support SVG", and you lack a good way to communicate what users can expect from your implementation. It's much easier to communicate clearly when you can implement a whole standard rather than a haphazard subset of one.

Also, parsing overhead is such a tiny fraction of the overall effort (in either case) it doesn't really mean anything.


People write Wikipedia for free, people answer on StackOverflow for free; we got used to it.


Wikipedia is an aggregator of knowledge and content created elsewhere. It's a wonderful resource and success story.

That "hey, pitch in whenever you feel like it" model works wonderfully for some things but is not a universal or automatic solution.


I wrote a free PDF editor (open a PDF, edit, export a PDF), my users edit around 500,000 PDF files every month.

I have been gradually improving it for the past five years. It is a part of my photo editor https://www.Photopea.com. I really know a lot about PDF; I wish I didn't know that much :D I am glad to see that there are others who try to "make sense" of PDF files instead of just rendering them :)

** Fun fact: often, a PDF contains text as an array of characters, each with its X and Y coordinates and a style (white characters omitted). It is up to you to "cluster" them into words, lines, paragraphs...

** Often, PDF text is made uneditable (on purpose). You see a text "Hello", but in fact, there is a text "bsiin", and a font, which renders "b" with a shape that looks like a letter "H", "s" as "e", and so on. If you open that PDF in a PDF viewer, select "Hello" and copy-paste it elsewhere, you get "bsiin".
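The clustering in the first fun fact can be sketched roughly like this. The `(x, y, width, char)` tuples and both tolerance thresholds are illustrative assumptions, not Photopea's actual code: group characters into lines by nearby Y, sort each line by X, and split a word wherever the horizontal gap is large.

```python
def cluster_chars(chars, line_tol=2.0, word_gap=3.0):
    """chars: iterable of (x, y, width, ch) tuples from a PDF content stream.
    Groups them into lines by bucketing nearby y values, then splits each
    line into words wherever the horizontal gap exceeds word_gap."""
    lines = {}
    for x, y, w, ch in chars:
        key = round(y / line_tol)              # bucket nearby baselines
        lines.setdefault(key, []).append((x, w, ch))
    result = []
    for key in sorted(lines):
        row = sorted(lines[key])               # left-to-right by x
        words, cur, prev_end = [], "", None
        for x, w, ch in row:
            if prev_end is not None and x - prev_end > word_gap:
                words.append(cur)              # gap too wide: new word
                cur = ""
            cur += ch
            prev_end = x + w
        if cur:
            words.append(cur)
        result.append(words)
    return result
```

Real PDFs need more care (rotated text, variable font sizes, kerning that shrinks or widens gaps), but this is the basic shape of the problem.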


Photopea is fantastic. I don't use Photoshop enough to justify a cloud subscription and adobe has shut down the licensing service for the version I have on disc (CS3).

https://community.adobe.com/t5/photoshop-ecosystem-discussio...


Photopea is a great solution, and I'm both glad it exists and that you are able to solve your issues using it.

But the fact that we as a society have accepted

> adobe has shut down the licensing service for the version I have on disc (CS3).

as something normal and acceptable is insane to me.


I haven't accepted it. I sail harder all the time. The Adobe Creative Suite hasn't been a recent priority, but I should look it up on principle. Thank you.

Photopea also helps when you're at a random computer and can help someone do an edit that would otherwise require access to a computer with software installed.


I also had some exposure to PDF, and looking back, it's almost better to render it and then run OCR on the rendered page.


> render

I think you meant raster


How do you deal with scanned pdf?


It is usually a PDF containing a single JPG file inside, which you can see and export at the original resolution.

To edit it, I guess you could paint over the text with white, and add a new text on top of it.
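Since simple scanner output often stores each page as a DCTDecode (i.e. raw JPEG) stream, you can sometimes pull the image out with a naive byte scan for JPEG start/end markers. This is a rough sketch, not a real PDF parser; it ignores other filters like FlateDecode or CCITTFaxDecode and can be fooled by marker bytes inside compressed data:

```python
def extract_jpegs(pdf_bytes):
    """Naively scan a PDF's raw bytes for embedded JPEG streams,
    delimited by the SOI (FF D8 FF) and EOI (FF D9) markers."""
    images, start = [], 0
    while True:
        soi = pdf_bytes.find(b"\xff\xd8\xff", start)   # start of image
        if soi == -1:
            break
        eoi = pdf_bytes.find(b"\xff\xd9", soi + 3)     # end of image
        if eoi == -1:
            break
        images.append(pdf_bytes[soi:eoi + 2])
        start = eoi + 2
    return images
```

Each returned byte string can be written straight to a `.jpg` file, which recovers the scan at its original resolution as described above.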

