SAM 2: Segment Anything in Images and Videos (github.com/facebookresearch)
824 points by xenova 88 days ago | 147 comments



Hi from the Segment Anything team! Today we’re releasing Segment Anything Model 2! It's the first unified model for real-time promptable object segmentation in images and videos! We're releasing the code, models, dataset, research paper and a demo! We're excited to see what everyone builds! https://ai.meta.com/blog/segment-anything-2/


Code, model, and data, all under Apache 2.0. Impressive.

Curious how this was allowed to be more open source compared to Llama's interesting new take on "open source". Are other projects restricted in some form due to technical/legal issues and the desire is to be more like this project? Or was there an initiative to break the mold this time round?


LLMs are trained on the entire internet, so loads of copyrighted data, which Meta can't distribute and is afraid to even reference.


This argument doesn't make sense to me unless you're talking about the training material. If that is not the case, then how does this argument relate to the license Meta attempts to force on downloaders of LLaMa weights?


they're literally talking about the training material.


data is creative commons


Yeah, but there's a CLA for some reason. I'm wary they will switch to a new license down the road.


So get it today. You can't retroactively change a license on someone.


Yeah, but it's a signal they aren't thinking of the project as a community project. They are centralizing rights towards themselves in an unequal way. Apache 2.0 without a CLA would be fine otherwise.


Grounded SAM has become an essential tool in my toolbox (for others: it lets you mask any image using only a text prompt). HUGE thank you to the team at Meta, I can't wait to try SAM2!


Huge fan of the SAM work, one of the most underrated models.

My favorite use case is that it slays for memes. Try getting a good alpha mask of Fassbender Turtleneck any other way.

Keep doing stuff like this. <3


I've been supporting non-computational researchers (i.e. scientists) in using and finetuning SAM for biological applications, so I'm excited to see how SAM2 performs and how the video aspects work for large image stacks of 3D objects.

Considering the instant flood of noisy issues/PRs on the repo and the limited fix/update support on SAM, are there plans/buy-in for support of SAM2 over the medium term beyond quick fixes? Either way, thank you to the team for your work on this and the continued public releases!


stupid question from a noob: what exactly is object segmentation? what does your library actually do? Does it cut clips?


Given an image, it will outline where objects are in the image.


and extract segments of images where the objects are in the image, as I understand it?

A segment then is a collection of images that follow each other in time?

So if you have a video comprised of img1, img2, img3, img4 and an object shows up in img1, img2 and img4

Can you catch that as a sequence img1, img2, img3, img4, and can you also catch just the object in img1, img2, img4 but get some sort of information that there is a break between img2 and img4 - the number of images in the break, etc.?

On edit: Or am I totally off about the segment possibilities and what it means?

Or can you only catch img1 and img2 as a sequence?


I'm not in the field and what SAM does is immediately apparent when you view the home page. Did you not even give it a glance?


Yes I did give it a glance, polite and clever HN member. It showed an object in a sequence of images extracted from video, and evidently followed the object through the sequence.

Perhaps however my interpretation of what happens here is way off, which is why I asked in an obviously incorrect and stupid way that you have pointed out to me without clarifying exactly why it was incorrect and stupid.

So anyway, there is the extraction of the object I referred to, but it also seems to follow the object through a sequence of scenes?

https://github.com/facebookresearch/segment-anything-2/raw/m...

So it seems to me that they identify the object and follow it for a contiguous sequence. Img1, img2, img3, img4, is my interpretation incorrect here?

But what I am wondering is - what happens if the object is not in img3? Like perhaps two people talking and the viewpoint shifting from the person talking to the person listening. The person talking is in img1, img2, img4. Can you get that full sequence, or is the sequence just img1, img2?

It says "We extend SAM to video by considering images as a video with a single frame." which I don't know what that means, does it mean that they concatenated all the video frames into a single image and identified the object in them, in which case their example still shows contiguous images without the object ever disappearing so my question still pertains.

So anyway, my conclusion is that what you said when addressing me was wrong, to quote: "what SAM does is immediately apparent when you view the home page", because I (the "you" addressed) viewed the homepage and still wondered about some things? Obviously wrong things that you have identified as being wrong.

And thus my question is: If what SAM does is immediately apparent when you view the home page can you point out where my understanding has failed?

On edit: grammar fixes for last paragraph / question.


> A segment then is a collection of images that follow each other in time?

A segment is a visually distinctive... segment of an image; segmentation is basically splitting an image into objects: https://segment-anything.com. As such, it has nothing to do with time or video.

Now SAM 2 is about video, so they seem to add object tracking (that is, attributing the same object to the same segment across frames).

The videos in the main article demonstrate that it can track objects in and out of frame (the one with bacteria, or the one with the boy going around the tree). However, they do acknowledge this part of the algorithm can produce incorrect results sometimes (the example with the horses).

The answer to your question is img1, img2, img4, as there is no reason to believe that it can only track objects in contiguous sequence.
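If it helps make "segmentation" concrete, here's roughly what prompting the original SAM looks like in Python (SAM 2's video API will differ; the checkpoint path below is a placeholder):

```python
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

# Load a SAM 1 checkpoint (variant and path are placeholders).
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h.pth")
predictor = SamPredictor(sam)

image = np.zeros((480, 640, 3), dtype=np.uint8)  # replace with a real RGB image
predictor.set_image(image)

# Prompt with one foreground click and get candidate masks back.
masks, scores, logits = predictor.predict(
    point_coords=np.array([[320, 240]]),
    point_labels=np.array([1]),   # 1 = foreground point
    multimask_output=True,
)
# Each mask is a boolean HxW array outlining one object; there is no notion of
# time here -- tracking the same mask across frames is the part SAM 2 adds.
```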


Thanks!


Classification per pixel


will the model ever be extended to being able to segment audio (eg. different people talking, different instruments in a soundtrack?)


Check out Facebook's Demucs, and the newer Ultimate Vocal Remover project on GitHub.


There are a ton of models that do stem separation like this. We use them all the time. Look up MVSep on Replicate.com.


That would be really cool to try out. I hope someone is doing that.


I wonder if it can be used with security cameras somehow. My cameras currently alert me when they detect motion. It would be neat if this would help cameras become a little smarter. They should alert me only if someone other than a family member is detected.

The recognition logic doesn't have to review the video constantly, only when motion is detected.

I think some cameras already try to do this, however, they are really bad at it.


Frigate uses both motion detection and object detection. Object detection is usually done with one of the YOLO models.
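As a rough sketch of that motion-gated setup (the model file, stream URL and thresholds here are just placeholders, not Frigate's actual pipeline):

```python
import cv2
from ultralytics import YOLO  # assumes the ultralytics package is installed

model = YOLO("yolov8n.pt")                        # placeholder detector
cap = cv2.VideoCapture("rtsp://camera/stream")    # placeholder camera URL

prev_gray = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)
    if prev_gray is not None:
        # Cheap motion gate: only run the detector when enough pixels change.
        diff = cv2.absdiff(prev_gray, gray)
        moving = cv2.countNonZero(cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)[1])
        if moving > 5000:  # arbitrary threshold
            results = model(frame, verbose=False)
            names = [model.names[int(c)] for c in results[0].boxes.cls]
            if "person" in names:
                print("person detected")  # hook your alert logic here
    prev_gray = gray
```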


Is there a reason Texans can't use the demo?


Texas and Illinois. Both issued massive fines against Facebook for facial recognition, over a decade after FB first launched the feature. Segmentation is, I guess, usable to identify faces, so it may seem too close to facial recognition to launch there.

Basically the same issue the EU has with demos not launching there. You fine tech firms under vague laws often enough, and they stop doing business there.


[flagged]


Your suggestion is that Meta is just too ethical?


Awesome model - thank you! Are you guys planning to provide any guidance on fine-tuning?


Oh, nice!

The first one was excellent. Now part of my Gimp toolbox. Thanks for your work!


How did you add it to gimp?



Thank you for sharing it! Are there any plans to move the codebase to a more performant programming language?


Everything in machine learning uses Python.

It doesn't matter much because all the real computation happens on the GPU. But you could take their neural network and do inference using any language you want.
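For example, the usual route is exporting the PyTorch graph to ONNX once and then running it from whatever runtime you like (C++, Rust, C#, ...). A toy sketch with a stand-in network, not the actual SAM 2 model:

```python
import torch
import torch.nn as nn

# Stand-in network; in practice you'd export the real (traced) model.
net = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.Conv2d(8, 1, 1))
dummy = torch.randn(1, 3, 224, 224)

torch.onnx.export(net, dummy, "model.onnx",
                  input_names=["image"], output_names=["mask"])

# The exported file can be loaded from any language with ONNX Runtime bindings;
# from Python it looks like this:
import onnxruntime as ort
sess = ort.InferenceSession("model.onnx")
out = sess.run(None, {"image": dummy.numpy()})
print(out[0].shape)
```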


It's all C, C++ and Fortran(?) under the hood so moving languages probably won't matter as much as you expect.


i covered SAM 1 a year ago (https://news.ycombinator.com/item?id=35558522). notes from quick read of the SAM 2 paper https://ai.meta.com/research/publications/sam-2-segment-anyt...

1. SAM 2 was trained on 256 A100 GPUs for 108 hours (SAM 1 was 68 hrs on the same cluster). Taking the upper-end ~$2/hr A100 price off gpulist means SAM 2 cost ~$50k to train - surprisingly cheap for adding video understanding?

2. new dataset: the new SA-V dataset is "only" 50k videos, with careful attention given to scene/object/geographical diversity, including that of the annotators. I wonder if LAION or Datacomp (AFAICT the only other real players in the open image data space) can reach this standard.

3. bootstrapped annotation: similar to SAM 1, a 3-phase approach where 16k initial annotations across 1.4k videos were then expanded with 63k + 197k more with SAM 1+2 assistance, with annotation time accelerating dramatically (89% faster than SAM 1 alone) by the end

4. memory attention: SAM2 is a transformer with memory across frames! special "object pointer" tokens stored in a "memory bank" FIFO queue of recent and prompted frames. Has this been explored in language models? whoa?

(written up in https://x.com/swyx/status/1818074658299855262)


A colleague of mine has written up a quick explainer on the key features (https://encord.com/blog/segment-anything-model-2-sam-2/). The memory attention module for keeping track of objects throughout a video is very clever - one of the trickiest problems to solve, alongside occlusion. We've spent so much time trying to fix these issues in our CV projects, now it looks like Meta has done the work for us :-)


> 4. memory attention: SAM2 is a transformer with memory across frames! special "object pointer" tokens stored in a "memory bank" FIFO queue of recent and prompted frames. Has this been explored in language models? whoa?

Interesting - how do you think this could be introduced to LLMs? I imagine in video some special tokens are preserved in the input to the next frame, so kind of like LLMs see previous messages in chat history, but it's filtered down to only some categories of tokens to limit the size of the context.

I believe this is a trick already borrowed from LLMs into the video space.

(I didn't read the paper, so that's speculation on my side)
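Something like this toy shape is what I'm picturing - purely illustrative, not SAM 2's actual design: keep a FIFO bank of past-frame features and let the current frame cross-attend into it.

```python
from collections import deque
import torch
import torch.nn as nn

class MemoryBankAttention(nn.Module):
    """Toy cross-attention over a FIFO bank of past-frame features."""
    def __init__(self, dim: int = 256, heads: int = 8, bank_size: int = 6):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.bank = deque(maxlen=bank_size)   # oldest frames fall off the end

    def forward(self, frame_tokens: torch.Tensor) -> torch.Tensor:
        # frame_tokens: (batch, num_tokens, dim) for the current frame
        if self.bank:
            memory = torch.cat(list(self.bank), dim=1)  # (batch, bank_tokens, dim)
            frame_tokens, _ = self.attn(frame_tokens, memory, memory)
        self.bank.append(frame_tokens.detach())         # store conditioned features
        return frame_tokens

# usage: each frame's tokens flow through, conditioned on recent frames
layer = MemoryBankAttention()
for _ in range(10):
    out = layer(torch.randn(1, 64, 256))
```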


I might be in the minority, but I am not that surprised by the results or by the relatively modest GPU hours. I've been doing video segment tracking for a while now, using SAM for mask generation and some of the robust academic video object segmentation models (see CUTIE: https://hkchengrex.com/Cutie/, presented at CVPR this year) for tracking the mask.

I need to read the SAM 2 paper, but point 4 seems a lot like what Rex has in CUTIE. CUTIE can consistently track segments across video frames even if they get occluded or go out of frame for a while.


Seems like there's functional overlap between segmentation models and the autofocus algorithms developed by Canon and Sony for their high-end cameras.

The Canon R1, for example, will not only continually track a particular object even if partially occluded, but will also pre-focus on where it predicts the object will be when it emerges from being totally hidden. It can also be programmed by the user to focus on a particular face to the exclusion of all else.


Of course Facebook has had a video tracking ML model for a year or so - Co-tracker [1] - just tracking pixels rather than segments.

[1] https://co-tracker.github.io/


The web demo is actually pretty neat: https://sam2.metademolab.com/demo

I selected each shoe as individual objects and the model was able to segment them even as they overlapped.


It's super fun! I used it on a video of my new cactus tweezers: https://simonwillison.net/2024/Jul/29/sam-2/


I guess the demo simply doesn't work unless you accept cookies?


Are there people who don’t accept cookies?

Don’t most websites require you to accept cookies?


In many jurisdictions requiring blanket acceptance of cookies to access the whole site is illegal, eg https://ico.org.uk/for-organisations/direct-marketing-and-pr... . Sites have to offer informed consent for nonessential cookies - but equally don't have to ask if the only cookies used are essential. So a popup saying 'Accept cookies?' with no other information doesn't cut it.


You don't need consent for functional cookies that are necessary for the website to work. Anything you are accepting or declining in a cookie popup shouldn't affect the user experience in any major way.

I know a lot of people who reflexively reject all cookies, and the internet indeed does keep working for them.


For those who are interested, things that can change are:

- ads are personalized (aka more relevant/powerful to make you want things).

- The experience can become slower when accepting all cookies due to the overhead generated by extensive tracking

In essence, there should be no relevant reason for users to accept cookies. Even accepting and rejecting should be equally easy. The only problem is that companies clearly prioritize pushing users to accept cookies because the cookies are valuable to them.


Always refuse them, close to zero problems.

I can’t think of a technical reason a website without auth needs cookies to function.


I don't. I see a few sibling comments who don't accept them either. And now I'm curious to know if there's a behavioral age gap - i.e. has the younger crowd been de facto trained to always accept them?


If someone gives me the choice I don't.


I reject cookies on the regular. Generally do not see any downsides for the things I browse.


I never accept any they don’t force me to accept.


I think under the GDPR this is even illegal.


It is giving me "Access Denied".


Might have issues if you're from Texas or Illinois due to their local laws.


What is the Illinois law?

Edit: Found lower in thread: biometric privacy laws


I tried it on the default video (white soccer ball), and it seems to really struggle with the trees in the background; maybe you could benefit from more such examples.


"The Firefox browser doesn’t support the video features we’ll need to run this demo. Please try again using Chrome or Safari."


Same :(

Just a guess, maybe it's the VideoFrame API? It was the only video-related feature I could find that Chrome and Safari have and FF doesn't.

https://caniuse.com/mdn-api_videoframe


> This research demo is not open to residents of, or those accessing the demo from, the States of Illinois or Texas.

Are there laws stricter than in California or EU in those places?


Try tracking the table tennis bat


Really cool. Doesn't really work for juggling unfortunately, https://sam2.metademolab.com/shared/fa993f12-b9ce-4f19-bb75-...


It looks like it’s working to me. Segmentation isn’t supposed to be used for tracking alone. If you add tracking on top, the uncertainty in the estimated mask for the white ball (which is sometimes getting confused with the wall) would be accounted for and you’d be able to track it well.
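For a rough idea of what "adding tracking on top" means: associate per-frame masks by IoU (real trackers add motion models, re-identification, and keep lost tracks alive). A toy sketch:

```python
import numpy as np

def mask_iou(a: np.ndarray, b: np.ndarray) -> float:
    """IoU between two boolean masks of the same shape."""
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0

def associate(tracks: dict, frame_masks: list, next_id: int, thresh: float = 0.3):
    """Greedily assign this frame's masks to existing track IDs by mask IoU."""
    updated = {}
    for mask in frame_masks:
        best_id, best_iou = None, thresh
        for tid, prev_mask in tracks.items():
            iou = mask_iou(prev_mask, mask)
            if iou > best_iou:
                best_id, best_iou = tid, iou
        if best_id is None:              # no good match: start a new track
            best_id, next_id = next_id, next_id + 1
        updated[best_id] = mask
    # A real tracker would also keep unmatched tracks alive for a few frames
    # (plus motion/appearance cues) so objects can reappear under the same ID.
    return updated, next_id
```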


The blog post (https://ai.meta.com/blog/segment-anything-2/) mentions tracking as a use case. Similar objects are known to be challenging, and they mention it in the Limitations section. In that video I only used one frame, but in some other tests, even when I prompted in several frames as recommended, it still didn't really work.


Yeah, it's a reasonable expectation since the blog highlights it. I just figure it's worth calling out that SOTA trackers are able to deal with object disappearance well enough that, when used with this, it would handle things. I'd venture to say that most people doing any kind of tracking aren't relying on their segmentation process.


Reference?


I’m not sure what you are looking for a reference to exactly, but segmentation as a preprocessing step for tracking has been one of, if not the primary, most typical workflow for decades.


I bet it would do a lot better if it had more frames per second (or slow-mo).


I think the first SAM is the open source model I've gotten the most mileage out of. Very excited to play around with SAM2!


> ...the first SAM is the open source model I've gotten the most mileage out of

How's OpenMMLab's MMSegmentation, if you've tried it? https://github.com/open-mmlab/mmsegmentation

It seems like Amazon is putting its weight behind it (from the papers they've published): https://github.com/amazon-science/bigdetection


What have you found it useful for?


Annotating datasets so I can train a smaller more specialized production model.
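Roughly, the workflow looks like this with SAM 1's automatic mask generator (checkpoint path is a placeholder; SAM 2 presumably slots in similarly), with each generated mask converted into a polygon or box for your labeling format:

```python
import numpy as np
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")  # placeholder path
generator = SamAutomaticMaskGenerator(sam)

image = np.zeros((480, 640, 3), dtype=np.uint8)  # replace with a real RGB image
masks = generator.generate(image)

# Each entry has a binary mask plus metadata you can turn into annotations.
for m in masks:
    seg = m["segmentation"]   # HxW boolean mask
    x, y, w, h = m["bbox"]    # XYWH bounding box
    # ...convert to COCO/YOLO polygons here, then review/fix before training...
```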


I wish there was a similar model like this, but for (long context) text.

Would be extremely useful to be able to semantically "chunk" text for RAG applications compared to the generally naive strategies employed today.

If I somehow overlooked it, would be very interested in hearing about what you've seen.


Semantic chunking. This is an intriguing idea.

I feel like one could do this with a chain of LLM prompts -- extract the primary subjects or topics from this long document, then prompt again (1 at a time?) to pull out everything related to each topic from the document and collate it into one semantic chunk.

At the very least, a dataset / benchmark centered around this task feels like it would be really useful.
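A cheaper embedding-based variant is also worth trying: split wherever the similarity between adjacent sentences drops. A rough sketch, assuming sentence-transformers and a naive sentence splitter:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

def semantic_chunks(text: str, threshold: float = 0.55) -> list[str]:
    # Naive sentence split; use a proper sentence tokenizer for real documents.
    sentences = [s.strip() for s in text.split(". ") if s.strip()]
    model = SentenceTransformer("all-MiniLM-L6-v2")
    emb = model.encode(sentences, normalize_embeddings=True)

    chunks, current = [], [sentences[0]]
    for i in range(1, len(sentences)):
        sim = float(np.dot(emb[i - 1], emb[i]))  # cosine (embeddings normalized)
        if sim < threshold:                      # topic shift -> start a new chunk
            chunks.append(". ".join(current))
            current = []
        current.append(sentences[i])
    chunks.append(". ".join(current))
    return chunks
```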


Yeah, I do think that's possible with LLMs, just too slow and expensive to be usable in most settings.


Anyone have any home project ideas (or past work) to apply this to / inspire others?

I was initially thinking the obvious case would be some sort of system for monitoring your plant health. It could check for shrinkage / growth, colour change etc and build some sort of monitoring tool / automated watering system off that.


I used the original SAM (alongside Grounding DINO) to create an ever growing database of all the individual objects I see as I go about my daily life. It automatically parses all the photos I take on my Meta Raybans and my phone along with all my laptop screenshots. I made it for an artwork that's exhibiting in Australia, and it will likely form the basis of many artworks to come.

I haven't put it up on my website yet (and proper documentation is still coming) so unfortunately the best I can do is show you an Instagram link:

https://www.instagram.com/p/C98t1hlzDLx/?igsh=MWxuOHlsY2lvdT...

Not exactly functional, but fun. Artwork aside, it's quite interesting to see your life broken into all its little bits. Provides a new perspective (apparently, there are a lot more teacups in my life than I notice).


Wow, that’s really cool!


After playing with the SAM2 demo for far too long, my immediate thought was: this would be brilliant for things like (accessible, responsive) interactive videos. I've coded up such a thing before[1] but that uses hardcoded data to track the position of the geese, and a filter to identify the swans. When I loaded that raw video into the SAM2 demo it had very little problem tracking the various birds - which would make building the interactivity on top of it very easy, I think.

Sadly my knowledge of how to make use of these models is limited to what I learned playing with some (very ancient) MediaPipe and Tensorflow models. Those models provided some WASM code to run the model in the browser and I was able to find the data from that to pipe it though to my canvas effects[2]. I'd love to get something similar working with SAM2!

[1] - https://scrawl-v8.rikweb.org.uk/demo/canvas-027.html

[2] - https://scrawl-v8.rikweb.org.uk/demo/mediapipe-003.html


Nice! Of particular interest to me is the slightly improved mIoU and 6x speedup on images [1] (though they say the speedup is mainly from the more efficient encoder, so multiple segmentations of the same image presumably would see less benefit?). It would also be nice to get a comparison to original SAM with bounding box inputs - I didn't see that in the paper though I may have missed it.

[1] - page 11 of https://ai.meta.com/research/publications/sam-2-segment-anyt...


How do these techniques handle transparent, translucent, or mesh/gauze/hair-like objects that interact with the background?

Splashing water or orange juice, spraying snow from skis, rain and snowfall, foliage, fences and meshes, veils, etc.


State of the art still looks pretty bad at this IMO.


Hi from Germany. In case you were wondering, we regulated ourselves to the point where I can't even see the demo of SAM2 until some other service than Meta deploys it.

Does anyone know if this already happened?


It’s more like “Meta is restricting European access to models even though they don’t have to, because they believe it’s an effective lobbying technique as they try to get EU regulations written to their preference.”

The same thing happened with the Threads app which was withheld from European users last year for no actual technical reason. Now it’s been released and nothing changed in between.

These free models and apps are bargaining chips for Meta against the EU. Once the regulatory situation settles, they’ll do what they always do and adapt to reach the largest possible global audience.


> Meta is restricting European access to models even though they don’t have to

This video segmentation model could be used by self-driving cars to detect pedestrians, or in road traffic management systems to detect vehicles, either of which would make it a Chapter III High-Risk AI System.

And if we instead say it's not specific to those high-risk applications, it is instead a general purpose model - wouldn't that make it a Chapter V General Purpose AI Model?

Obviously you and I know the "general purpose AI models" chapter was drafted with LLMs (and their successors) in mind, rather than image segmentation models - but it's the letter of the law, not the intent, that counts.


> The same thing happened with the Threads app which was withheld from European users last year for no actual technical reason. Now it’s been released and nothing changed in between.

No technical reason, but legal reasons. IIRC it was about cross-account data sharing from Instagram to Threads, which is a lot more dicey legally in the EU than in NA.


It’s not like Meta doesn’t know how it works. They ship many apps that share accounts like FB + Messenger most prominently.

They’ve also had separate apps in the past that shared an Instagram account, like IGTV (2018 - 2022).

The Threads delay was primarily a lobbying ploy.


No, it really was a legal privacy thing. I worked in privacy at Meta at that time. Everybody was eager to ship it everywhere, but it wasn't worth the wrath of the EU to launch without a clear data separation between IG and threads.


Not saying you're wrong, but in this instance it might be a regulation specific to Germany since the site works just fine from the Netherlands.


Sounds like big tech's strategy to make you protest against regulating them is working brilliantly.


Regulation in this space works exclusively in favor of big tech, not against them. Almost all of that regulation was literally written for the benefit and with aid of the big tech.


Hi also from Germany - works fine here


Looking at it right now from Denmark. You must have some other problem.


Which German regulation prevents this? Is it biometric related?

It seems that https://mullvad.net is a necessary part of my Internet toolkit these days, for many reasons.


> This research demo is not open to residents of, or those accessing the demo from, the States of Illinois or Texas.

Alright, I'll bite, why not?


I know Illinois and Texas have biometric privacy laws; I would guess it's related to that. (I am in Illinois and cannot access the demo, so I don't know what if anything it's doing which would be in violation.)


It's because their biometric privacy laws are written in such a general way that detecting the presence of a face is considered illegal.


I'm kinda on board with this.


So there will be a lot of blurry portraits coming from Illinois and Texas as autofocus can't find faces? /s


> We extend SAM to video by considering images as a video with a single frame.

I can't make sense of this sentence. Is there some mistake?


Everything is a video. An image is just the special case of a video one frame long.


Here's a sentence I would understand: > We extend SAM to video and retrofit support for images by considering images as a video with a single frame.

As it is written, I don't see the link between "We extend SAM to video" and "by considering images as a video with a single frame".


I read it like this:

- "We extend SAM to video", because is was previously only for images and it's capabilities are being extended to videos

- "by considering images as a video with a single frame", explaining how they support and build upon the previous image functionality

The main assumptions here are that images -> videos is a level up as opposed to being a different thing entirely, and the previous level is always supported.

"retrofit" implies that the ability to handle images was bolted on afterwards. "extend to video" implies this is a natural continuation of the image functionality, so the next part of the sentence is explaining why there is a natural continuation.


I would like to train a model to classify frames in a video (and identify the "best" frame for something I want to locate, according to my training data).

Is SAM-2 useful as a base model to finetune a classifier layer on? Or are there better options today?
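What I have in mind is roughly the standard frozen-backbone recipe; a sketch below with a stand-in encoder, since I don't know SAM-2's encoder interface yet (a ResNet here is just a placeholder, and the feature size would change with SAM-2's image encoder):

```python
import torch
import torch.nn as nn
import torchvision.models as models

# Stand-in backbone; you'd swap in a frozen SAM-2 image encoder here.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = nn.Identity()              # expose 2048-d features
for p in backbone.parameters():
    p.requires_grad = False              # freeze the pretrained weights

head = nn.Linear(2048, 2)                # e.g. "best frame" vs "not best"
optimizer = torch.optim.AdamW(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(frames: torch.Tensor, labels: torch.Tensor) -> float:
    with torch.no_grad():
        feats = backbone(frames)         # (batch, 2048) features
    logits = head(feats)
    loss = loss_fn(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Whether SAM-2's features beat a plain classification backbone for "best frame" selection is an empirical question, which is really what I'm asking.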


Has anyone built anything cool with the original SAM? What did you build?


One thing it's enabled is automated annotation for segmentation, even on out-of-distribution examples. E.g. in the first 7 months of SAM, users on Roboflow used SAM-powered labeling to label over 13 million images, saving over ~21 years[0] of labeling time. That doesn't include labeling from self-hosting autodistill[1] for automated annotation either.

[0] Based on comparing average labeling session time on individual polygon creation vs SAM-powered polygon examples.

[1] https://github.com/autodistill/autodistill


As mentioned in another comment I use it all the time for zero-shot segmentation to do quick image collage type work (former FB-folks take their memes very seriously). It’s crazy good at doing plausible separations on parts of an image with no difference at the pixel level.

Someone who knows Creative Suite can comment on what Photoshop can do on this these days (one imagines it's something), but the SAM stuff is so fast it can run in low-spec settings.


Grounded SAM[1] is extremely useful for segmenting novel classes. The model is larger and not as accurate as specialized models (e.g. any YOLO segmenter), but it's extremely useful for prototyping ideas in ComfyUI. Very excited to try SAM2.

[1] - https://github.com/IDEA-Research/Grounded-Segment-Anything


We use SAM to segment GUI elements in https://github.com/OpenAdaptAI/OpenAdapt


I used it for segmentation for this home climbing/spray wall project: https://freeclimbs.org/wall/demo/edit-set

It does detection on the backend and then feeds those bounding boxes into SAM running in the browser. This is a little slow on the first pass but allows the user to adjust the bboxes and get new segmentations in nearly real time, without putting a ton of load on the server. Saved me having to label a bunch of holds with precise masks/polygons (I labeled 10k for the detection model and that was quite enough). I might try using SAM's output to train a smaller model in the future, haven't gotten around to it.

(Site is early in development and not ready for actual users, but feel free to mess around.)


We are using it to segment different pieces of an industrial facility (pipes, valves, etc.) before classification.


Are you working with image data or do you have laser scans? If laser scans, how are you extending SAM to work with that format?


Wonder if I can use this to count my winter wood stock. Before resuscitating my mutilated Python environment, could someone please run this on a photo of stacked uneven bluegum logs to see if it can segment the pieces? OpenCV edge detection does not cut it:

https://share.icloud.com/photos/090J8n36FAd0_lz4tz-TJfOhw


Heads up, that link reveals your real name. Maybe edit it out if you care.


thx for the heads up :) full name is in my HN profile. Good to know iCloud reveals that.


Would love to use it for my startup, but I believe it has to be self-hosted on a server with a GPU? Or is there an easy-to-use API?


I ran it with 3040x3040px images on my MacBook M1 Pro in about 9 seconds + 200ms or so for the masking.


The previous SAM v1 you can use e.g. here:

https://fal.ai/models

https://replicate.com/

You just have to wait probably a few weeks for SAM v2 to be available. Hugging Face might also have some offering.


It's OSS, so there isn't an "official" hosted version, but someone probably is gonna offer it soon.


What happened to text prompts that were shown as early results in SAM1? I assume they never really got them working well?


Thank you for this amazing work you are sharing.

I do have 2 questions: 1. Isn't addressing the video frame by frame expensive? 2. In the web demo, when the leg moves fast it loses track of the shoe. Does the memory part not throw in some heuristics to overcome this edge case?


Impressive. Wondering if this is now fast enough out of the box to run on an iPhone. The previous SAM had some community projects such as FastSAM, MobileSAM and EfficientSAM that tried to speed it up. I wish the README, when reporting FPS, said what hardware it was tested on.


I’d guess testing hardware is same as training hardware, so A100. If it was on a mobile device they would have definitely said that.


Very excited to give it a try, SAM has had great performance in Biology applications.



Will it handle tracking out of frame?

i.e. if I stand in the center of my room and take a video of the room spinning around slowly over 5 seconds. Then reverse spin around for 5 seconds.

Will it see the same couch? Or will it see two couches?


I think it depends how long it is out of frame for, there is a cache that you might be able to tweak the size of.


Interesting how you can bully the model into accepting multiple people as one object, but it keeps trying to down-select to just one person (which you can then fix by adding another annotated frame in).


This is great! Can someone point me to examples of how to bundle something like this to run offline in a browser, if that's possible at all?


Anyone managed to get this to work on Google Colab? I am having trouble with the imports and not sure what is going on.


Does it segment and describe or recognize objects? What "pipeline" would be needed to achieve that? Thanks.


This is a super-useful model. Thanks, guys.


Somewhat related: is there much research into how these models can be tricked or possible security implications?


Cool! Seems this is CUDA only?


Can run on CPU (slower) or AMD GPUs.


What about Mac/Metal?


This is what I was getting at - I tried on my MBP and had no luck. Might be just an installer issue, but I wanted confirmation from someone with more know-how before diving in.


I got SAM 1 to work with MPS device on my MacBook Pro M1, don’t know if it works with this one too.
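For reference, the device-picking pattern I use covers the cases mentioned in this thread (AMD GPUs via ROCm builds also show up as "cuda" in PyTorch); whether every SAM 2 op is supported on MPS is something you'd have to verify yourself:

```python
import torch

if torch.cuda.is_available():             # NVIDIA, or AMD GPUs via ROCm builds
    device = torch.device("cuda")
elif torch.backends.mps.is_available():   # Apple Silicon (Metal)
    device = torch.device("mps")
else:
    device = torch.device("cpu")           # works everywhere, just slower

print(f"running on {device}")
# model = model.to(device); image/video tensors go to the same device
```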


Huge fan of the SAM loss function. Thanks for making this.


Trying to run https://sam2.metademolab.com/demo and...

Quote: "Sorry Firefox users! The Firefox browser doesn’t support the video features we’ll need to run this demo. Please try again using Chrome or Safari."

Wtf is this shit? Seriously!


Any use of this category of tools in OCR?


Roughly how many fps could you get running this on a raspberry pi?


It's amazing!


Awesome! Loved SAM - it already made our segmentation problem so, so, so much better.

I was wondering why the original one got deprecated.

Is there now also a good way to finetune it from the official side / your side?

Any benchmarks against SAM1?


How many days will it take to see this in military use, killing people…



