Refactoring your entire codebase just to use that one ES module that is incompatible with CJS is a big pill to swallow if your codebase is ... big (or you have many).
True, but if you just do it, even if it's a large undertaking, the benefits of ESM-only code go beyond just being able to use more packages. And someday you'll have to move to ESM anyway.
On the flip side, we have other amazing devs who also maintain cutting-edge libraries and simply do the small amount of extra work to make their modules available in both ESM and CommonJS to this day:
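For anyone wondering what that "small amount of extra work" looks like: it's usually just a conditional `exports` map in package.json that points `import` at an ESM build and `require` at a CJS build. A minimal sketch (the name and file paths are placeholders):

```json
{
  "name": "my-lib",
  "type": "module",
  "exports": {
    ".": {
      "import": "./dist/index.js",
      "require": "./dist/index.cjs"
    }
  }
}
```

With `"type": "module"`, Node treats `.js` as ESM and `.cjs` as CommonJS, so the same package resolves correctly from both worlds.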
Not sure about realtime voiceover, and I'm not in emergency services myself, but having an LLM fed the call audio to generate a supplementary summary of what was discussed could certainly be beneficial in situations where the operator needed clarity, or where a mistaken address or important detail needed double-checking after the call.
It has its quirks and even some annoying bugs, but where it excels it's IMO way better than what any competing/proprietary design tool can do (vector and bitmap exporting, vector+bitmap combined layering, shape/color/text layout, PDF editing/creating, vector pen tool, etc). I use it to create UI for games & apps and, more generally, to build sprawling UX scenarios and concept flowcharts.
In more recent builds its performance has become quite good, which was a problem before. Granted, there's still lots of room for improvement; in particular I wish it had more natural flowcharting capability.
Inkscape is amazing, and it's had incredible growth. I remember using it 5+ years ago and it was incredibly buggy and the UX was terrible. I now use it extensively and it rocks.
Concurring that Inkscape today is awesome, while it definitely was not a decade ago (how time flies... and a decade ago it was already... ten years old or so!).
Disclaimer: I'm an engineer not a UX/UI designer, and I use Inkscape mostly for graphic designs and the odd super-simple CAD stuff where I don't think starting qcad is worth it. Still, I'm immensely happy that I can just get stuff done in Inkscape and that everything inside makes sense and is generally discoverable "solely by logic".
Another vote for Inkscape here - amazing tool. I probably use less than 1% of its functionality, but it is far and away the best PDF editor I've come across (I work with a lot of design drawings that started life as DWG/CAD files, are exported to PDF, sent around for proofing/review, and need to have changes marked up on them).
Indeed, that is primarily where it excels. Your layered source files are SVG and you can export to SVG (and import SVG, obviously). The bitmap selection/exporting is also excellent: you can have these massive vector canvases (with any number of bitmaps and vector shapes/graphics mixed in) and quickly export any slice or selection you make without having to resize the canvas or copy/paste somewhere. It will even remember the export path when you click on the object or layer again later (aside from a bug with symlinks on Linux) - ideal for iterative work and exporting revisions to clients or colleagues.
Yeah it's great for SVGs. I still use Illustrator because the type tools are a lot easier to work with for constant use, I already have a CC subscription grandfathered in at a cheap rate, and some things in it I'm just plain-old used to... But I could deal with Inkscape if I decided to dump Adobe. Gimp, not so much. I'd definitely be buying Affinity's photo editor or something for raster work.
Amazon's repo here is open source at least, so perhaps this could be remedied with an edit to their README and a PR - go for it!
Agree with you, credit where credit is due - I have been using QuickJS for some time and it's awesome. For the cost of about 1MB you can get all of modern JS in your C/C++ binary.
It's good for quick things and prototyping because you can always swap out those calls with native later. Its API is generally easier to remember, and less typing than most native equivalents. You can also use its API server-side via Cheerio to parse and manipulate HTML without a DOM.
edit: also, it's way more lightweight than React/Vue/Svelte. I don't necessarily disagree that you shouldn't reach for jQuery if you have a dynamic page (something like uhtml + preact signals would be good if you have a fair bit of rendering logic going on), but I would say you should totally try seeing how far you can get with jQuery instead of Svelte/React/Vue on simple pages.
Is it really way more lightweight than Svelte? Svelte has more tooling (of course) but it ships no runtime and only sends the user the JS they actually need to interact with your page.
Ironically, with AI art tech the fans themselves can now carry on the series as they like (including making animated flicks). Not saying this wouldn't violate C&H's IP protections, nor that the resulting product would have the same essence, but a creative and technically inclined fan tinkering in spare time now has the power to create an all-new 'Calvin & Hobbes book' of their dreams. Even if they just want to do it for fun on their own.
Of course, if you were to put in enough effort to make a complete book (i.e. a 100+ page Sunday comic collection) that could stand among the rest of the original series, it's probably best to shift your art style, theme, and content far enough from the original that what you're making is essentially a spiritual successor - enabling you to comfortably share it with the outside world.
The ControlNet model, specifically the scribble ControlNet (and ComfyUI), was a major gamechanger for me.
I was getting good results with just SD and occasional masking, but it would take hours and hours to home in on and composite a complex scene with specific requirements & shapes (with most of the work spent curating the best outputs and then blending them into a scene with Gimp/Inkscape).
Masking is unintuitive compared to scribble, which gets a similar effect; no need to paint masks (which is disruptive to the natural process of 'drawing' IMO) - instead just make a general black-and-white outline of your scene. Simply dial the conditioning strength up or down to have it follow that outline more tightly or more fuzzily.
You can also use Gimp's Threshold or Inkscape's Trace Bitmap tool to get a decent black & white outline from an existing bitmap to expedite the scribble procedure.
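If you'd rather script that step than open Gimp, the Threshold operation is a one-liner with Pillow. A sketch (assumes Pillow is installed; `cutoff` is the black/white split point you'd tune per image):

```python
from PIL import Image  # assumes the Pillow package is available

def to_outline(img: Image.Image, cutoff: int = 128) -> Image.Image:
    """Mimic Gimp's Threshold: grayscale the image, then force every
    pixel to pure black or pure white around the cutoff value."""
    return img.convert("L").point(lambda p: 255 if p >= cutoff else 0)

# scribble = to_outline(Image.open("photo.png"))
# scribble.save("scribble.png")  # feed this to the scribble ControlNet
```

Lowering `cutoff` keeps more of the image white; raising it keeps more black, so you can bias the outline toward just the dominant shapes.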
ComfyUI is really nice. The fact that the node graph is saved as PNG metadata actually makes node-based workflows super fluid and reproducible, since all you need to do to get the graph for an image is drag and drop the result PNG onto the GUI. This feels like a huge quality-of-life improvement compared to any other lightweight node tools I've used.
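The mechanism behind that is plain PNG text chunks: the graph JSON rides along inside `tEXt` chunks (ComfyUI stores keys like `prompt` and `workflow` - treat the exact key names as an assumption here). A stdlib-only sketch that pulls them out of any PNG:

```python
import struct

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def extract_text_chunks(data: bytes) -> dict:
    """Return {keyword: text} for every tEXt chunk in a PNG byte string."""
    assert data[:8] == PNG_SIG, "not a PNG file"
    out, pos = {}, 8
    while pos + 8 <= len(data):
        # each chunk: 4-byte big-endian length, 4-byte type, data, 4-byte CRC
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        if ctype == b"tEXt":
            key, _, text = data[pos + 8:pos + 8 + length].partition(b"\x00")
            out[key.decode("latin-1")] = text.decode("latin-1")
        pos += 12 + length
    return out

# meta = extract_text_chunks(open("comfy_output.png", "rb").read())
# workflow_json = meta.get("workflow")  # the node graph, if present
```

Since it's just metadata, it survives copying the file around but not re-encoding (screenshots, most image hosts, etc. will strip it).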
Yeah the PNG embedded 'drag and drop to restore your config' is brilliant.
Reminds me of Fireworks, which Adobe killed off (after putting out a decent update or two, to be fair), and which used PNGs for layers and metadata à la the PSD format.
But it's more analogous to a 3D modelling suite like Blender or Maya with a theoretical feature where you could take a rendered output image, drag and drop it back into the 3D viewport, and have it instantly restore all the exact render settings you used. That would be handy!
You don't need to go through Gimp or Inkscape, this is built-in to the auto1111 ControlNet UI. You just dump the existing photo there and you can select a bunch of pre-processors like edge-detection or 3D depth extraction, which is then fed into ControlNet to generate a new image.
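As a rough illustration of what an edge-detection pre-processor does before ControlNet ever sees the image - this sketch uses Pillow's built-in `FIND_EDGES` filter as a crude stand-in for the real Canny/HED pre-processors in the auto1111 UI (assumes Pillow is installed):

```python
from PIL import Image, ImageFilter  # assumes the Pillow package is available

def edge_map(img: Image.Image) -> Image.Image:
    """Crude stand-in for a ControlNet edge pre-processor:
    grayscale, then a Laplacian-style edge filter - boundaries
    light up, uniform regions go dark."""
    return img.convert("L").filter(ImageFilter.FIND_EDGES)

# control_input = edge_map(Image.open("apartment.jpg"))
# control_input.save("edges.png")  # hand this to ControlNet as conditioning
```

The idea is the same regardless of the detector: reduce the photo to structure only, so the model keeps the layout while re-imagining the surfaces.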
This is super powerful for, e.g., visualizing the renovation of an apartment room or a house exterior.
Will have to play with those more, thx for the heads-up. I do find, however, that for scribble outlines I often like to draw my own lines by hand instead of using an auto-generated one, to emphasize the absolute key areas that wouldn't otherwise be auto-identified. In logo and 2D design, for example, you may have a very specific text shape that needs to be preserved regardless of contrast or perceivable depth.
That's for sure - I think we have seen other kinds of edge detectors or filters work better for differing use cases, especially around foreground images where you want to retain more information (i.e. images with small nitty-gritty details).
In this post, we just seek to showcase the fastest way to do it - and how augmentation may potentially help vary the position!
Yeah, that tutorial is decent; it's what I used to get going.
Note that all of the images in those Comfy tutorials (except for images of the UI itself) can be drag-and-dropped into ComfyUI, and you'll get the entire node layout, which you can use to understand how it works.
Another good resource is civit.ai; specifically, look for images that have ComfyUI metadata embedded. I made a feature request that they create a tag for uploaders to flag ComfyUI PNGs, but I'm not sure if they've added that yet. Or browse Reddit or Discord for people sharing PNGs with Comfy embeds.
Trying out different models (also available from civit) is a good way to get an understanding of how swapping out models affects performance and results. I've been abusing AbsoluteReality (v1.81) + the More Details LORA because it's just so damn fast and the results are great for almost any requirement I throw at it. AI moves so fast, but I don't even bother updating the models anymore; there is just so much potential in the models we already have that there's more payoff in mastering other techniques like the depth-map ControlNet.
I would say that, above all, extensive familiarity with an image editor like Photoshop, Gimp, or Krita will get you the most mileage, particularly if you have specific needs beyond just fun and concepting. AI art makes artists better; people who struggle with image editing will struggle to get the most out of this new tech, just as people who struggle with code will have trouble maintaining what Copilot or ChatGPT spits out (versus a coder who will refactor and fine-tune before integrating it into the rest of their application).
Is there any solution yet for consistency that goes beyond form and structure - keeping things like outfits, colors, and facial features consistent - in a way that makes it easy to compose scenes with multiple consistent characters?