Does it work for touchscreens too? When I plug a portable monitor with a touchscreen into my macOS laptop, the touch input gets sent to the screen where the cursor is (i.e. I touch the touchscreen, but it clicks something on the internal display, because that's where I left the cursor), instead of always going to the physical monitor associated with that touchscreen.
Music production is one for sure. Macs are the standard there, and each screen usually represents a different "function" and is set up near the device or instrument it relates to. Most people use a MacBook or iPad per "station", but for some setups you could run it all off one machine if the inputs were right. There are even workflow advantages to doing it that way. I've heard people wish for this.
I have a screen for my simrig (racing wheel and pedals setup, almost like a car cockpit) with a wireless keyboard with a built-in touchpad mounted to the rig. On my desk across the room, I have a screen, mouse, and keyboard, and the PC that powers the rig. I have to turn one or the other monitor off just to use the single mouse next to it. If this were available for Windows too, I would use it to keep the monitors separated, each with their own input devices.
I could imagine several creative workflows that could use setups separated like this, perhaps in a photo studio or a CNC shop.
My Windows computer is connected to both a monitor on my desk and a TV way over on the other side of the room, for playing games that make more sense that way. As the sibling post here mentioned, it can be kind of annoying to pick up the mouse and find the cursor is on the other side of the room.
I'm not a Mac user, but something like this would be nice for me. I'm visually impaired and have to be very close to the screen, so I have my main monitor on an arm so it can float above my keyboard. Then I have a second screen (or laptop) on a desk next to me, but to see it I have to turn 90 degrees and lean in, which makes it awkward to use my keyboard/mouse. This is better with a laptop because it gives me a second keyboard/mouse for the second monitor, but I have to keep moving the cursor back and forth. My solution so far has been to use AutoHotkey to define some hotkeys that move my mouse to the center of my main display and the center of my secondary display, which works surprisingly well.
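For anyone curious, the logic behind those hotkeys is tiny. Here's a minimal sketch of the idea in TypeScript (the display rectangles are made-up values; in actual AutoHotkey the warp itself would be a `MouseMove` call):

```typescript
// A display is just a rectangle in global desktop coordinates.
interface Rect { x: number; y: number; w: number; h: number }

// Center of a display: where a hotkey should warp the cursor to.
function center(d: Rect): { x: number; y: number } {
  return { x: d.x + d.w / 2, y: d.y + d.h / 2 };
}

// Hypothetical layout: main monitor on the left, secondary to the right.
const displays: Record<string, Rect> = {
  main:      { x: 0,    y: 0, w: 2560, h: 1440 },
  secondary: { x: 2560, y: 0, w: 1920, h: 1080 },
};

// Each hotkey just moves the cursor to one of these points:
console.log(center(displays.main));      // { x: 1280, y: 720 }
console.log(center(displays.secondary)); // { x: 3520, y: 540 }
```

The nice property is that it's stateless: no matter where the cursor got lost, one keypress puts it somewhere predictable.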
Although to be honest, I rarely use the second display. I mainly use it while screen sharing in a meeting so I can have notes and stuff up on my main display.
I think that an auto-highlight-on-move, similar to mashing CTRL constantly on Windows, only with the screen-darkening ability of the double-CTRL-tap in Microsoft PowerToys, would be more effective for you.
As in: with every significant (user-defined-magnitude) motion of the mouse, all screens darken/grey out except for a bright spotlight surrounding your mouse. That lets you always see where your mouse is, even with near-blindness, as the spotlight is going to be 3-6 cm in diameter regardless of the screen resolution, and obvious as heck.
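If anyone wants to build this, the constant-physical-size part is just a DPI conversion. A sketch in TypeScript (the DPI values below are illustrative; a real implementation would query each display's DPI from the OS):

```typescript
// Convert a desired physical diameter (cm) into pixels for a given display
// DPI, so the spotlight looks the same size on a 1080p TV and a 5K panel.
function spotlightDiameterPx(diameterCm: number, dpi: number): number {
  const inches = diameterCm / 2.54; // 1 inch = 2.54 cm
  return Math.round(inches * dpi);
}

// A ~4 cm spotlight on a typical 96 DPI monitor vs a 218 DPI Retina panel:
console.log(spotlightDiameterPx(4, 96));  // 151 px
console.log(spotlightDiameterPx(4, 218)); // 343 px
```

Same physical size on both panels, wildly different pixel counts, which is exactly why a fixed-pixel highlight looks wrong across mixed-resolution setups.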
Edit: been looking for a tool like this myself. Still have decent vision, but with six monitors at ≈1.5-2k resolution each, a cursor can go missing, fast. Would love to have cursor highlighting (no stupid Win98 “tail”, thanks) on any mouse movement.
Wow, really neat idea! Even if you're just using two monitors close to each other (a laptop and a monitor for example), I could see scenarios where it would help a ton to have fast access on one or the other.
Especially with magic track pad, I could imagine this working quite well. You might be able to use both simultaneously with a single hand for some use cases.
Interestingly enough, macOS is multi-seat. You can start a desktop environment as different users over VNC without interrupting the physical user of the machine.
This is an Electron app, but it never opens a web view; it configures the tray using the native menu bar API. Neat.
I wish it were this easy to get started with Swift. The SwiftUI API makes the menu stuff almost this easy, but dealing with Xcode projects and Apple Developer program auth stuff has always put me off.
I was also surprised it's a TS project. I will say, learning Swift and native macOS dev during the last year without a job has been quite fun, and a nice departure from web stuff. Swift's type system is very similar to TypeScript's, and SwiftUI is somewhat similar to any component UI library, but it has its own interesting bits. Xcode is a beast, but it can be a decent IDE once you get familiar with the quirks of the language and build process.
I find Xcode unbelievably unstable for a modern set of developer tools. The last time I was using it to build software, it crashed about once a day, and froze or did weird things regularly. I've also had severe typing lag (!!) and phantom build issues where a build fails with some strange linker error or something, but then doing a clean & rebuild mysteriously fixes everything.
And SwiftUI seems great on the surface until you spend time with it and try to actually do anything remotely custom. I spent about a week trying to get a swipe gesture to push the keyboard out of the way (like you see in Apple Notes and most messaging applications). The web is full of half-working hacks that throw up warning messages when the app is running. When things don't work, it's incredibly difficult to figure out what's going on. The documentation is atrocious and the debugging tools are absolute rubbish compared to what you'll find in any modern web browser.
Coming back to writing TypeScript in VS Code or Rust in IntelliJ felt like a breath of fresh air compared to working in Xcode. Everything works, and it works fast and reliably. I have good documentation, and I can read the code I'm calling if I get confused.
I really like Swift as a language and I think Apple has a lot of great ideas there, but developing a SwiftUI app using Apple's developer tooling was a miserable experience. Xcode feels like a cobbled-together, buggy mess.
Yeah, SwiftUI has a very good happy path, flanked on either side by a 1000-foot cliff into a spike pit.
My least favorite thing is when the compiler takes a look at my view body, times out trying to infer some kind of generic, and then tells me “hey, this 100-line code block has an error, but we don’t know where or what exactly”, so I have to comment out code until I find the specific call expression.
Or actually split it out into reasonable chunks of code. It's the compiler encouraging you to create smaller views, which is better for re-rendering performance and code quality.
A compiler in 2024 should be able to handle a function of 100 lines. That's like 1.5 screen heights of code for me, pretty normal! I'd have some sympathy if the function were 10,000 lines, but it'll happen with a 15-line block, or a 30-line block, or a 60-line block.
I've written code in a bunch of languages, and it's the first time a compiler has refused to compile something but couldn't tell me why or which expression is incorrect. "There's something wrong with your function", and then I have to go play type checker and hover over every expression wondering what the issue is? I'm shocked by it every time it happens.
I think I'd agree with your assessment, and some of those problems are what I was thinking of when I wrote my comment, especially the documentation.
> Coming back to writing typescript in VS Code or writing rust in Intellij felt like a breath of fresh air compared to working in Xcode. Everything works, and it works fast and reliably. I have good documentation and I can read the code I'm calling if I get confused.
Some of the tools we use are fast because they have little or no baggage, and some aren't because they have tons. In terms of what Xcode tries to accomplish, it fares better and is capable of more than any tool I can think of, but there's no way it could compete with anything else that has a relatively austere set of requirements. It is indeed a cobbled-together buggy mess, but I just don't think you get 20+ years in without a bit of that.
IntelliJ is an interesting example in that they have managed to build tools that are both rich with features and somewhat performant, and to your implied point, they probably have to in order to compete; if all Xcode did was compile raw Swift and let you visually debug, it would almost certainly lose that fight. Maybe VS Code or Nova will usurp Xcode and make the experience of just writing Swift and SwiftUI faster against Foundation, AppKit, UIKit, etc., but then you'll still likely need to run it on some simulator, potentially analyze performance issues, preview layouts visually, maybe run cloud builds, configure Core Data models, blah blah blah. Not all the time, but it's there.
Comparatively, other tools just delegate responsibility to other huge projects that haven't always been around, or all that great, or great enough. Writing any web code without Chrome or Firefox devtools would suuuuuuck, and ultimately that's usually what I'm doing. WebStorm and VS Code are.. ok at the full package I guess, but not as good as discrete applications.
Webpack is also miserably slow and arcane compared to esbuild or Vite, but those have their own limitations.
> Some of the tools we use are fast because they have little or no baggage, and some aren't because they have tons. In terms of what Xcode tries to accomplish, it fares better and is capable of more than any tool I can think of,
Hard disagree on this. I can't think of any IDE that performs as badly, or is as unreliable as Xcode.
IntelliJ is significantly more complex, and it's still more responsive. (And it's reliably responsive. I've never experienced typing lag in IntelliJ on my M1.) Xcode will lag even when I have plenty of cores doing absolutely nothing. As far as I can tell, the Xcode UI doesn't seem to be on a separate thread from autocomplete queries. So if there's work happening "in the background" (like looking up function signatures for autocomplete), typing becomes janky. IntelliJ does seriously well for itself given it's written in Java and doesn't use native controls. And it handles about 10 different languages. Xcode honestly has every advantage here, but still manages to drop the ball.
And if you want to say Xcode has been around longer with more technical debt, well, Microsoft's Visual Studio (not VS Code) is honestly fabulous compared to Xcode. VS is of a similar vintage to Xcode, with just as many "cobbled together" features added over the years - from VB to C# and .NET, WinForms to WPF to whatever the latest thing is. But it's still fast and reliable. Well, it's fast and an order of magnitude more reliable than Xcode.
And for the record, it's a really bad look when the people writing developer tooling can't develop code very well. I expect better from Apple.
> Hard disagree on this. I can't think of any IDE that performs as badly, or is as unreliable as Xcode.
Well then, great material for an interesting dialogue :)
> IntelliJ is significantly more complex, and it's still more responsive. (And it's reliably responsive. I've never experienced typing lag in IntelliJ on my M1.)
I don't know if even the full IntelliJ is more complex, but it is at least quite complex while being reliable, and deserves a lot of credit for that. I've always defended it, particularly for its robust semantic full-project search, where it also happens to do better than Xcode. If they brought back or renewed a product for building against Apple platforms, I'd consider it. My main argument is that it would be a truly remarkable feat for any company or project to compete with Xcode and reach close to feature parity without relying on features already part of Xcode. It still supports building and compiling (based on a cursory search) Pascal, C++, C, ObjC, ObjC++, and even Python, Java, and Ruby, simulation on at least 6 different platforms/device types, click-and-drag interface building for Mac and iOS, some kind of AR/VR environment, and I think it is used to build itself. I'd use PyCharm, though, if building any sufficiently complex Python app, and Android Studio presumably for that, so some of the theoretical capabilities of Xcode are already better served by others. A hypothetical competitor shouldn't try to replicate all of those, though; shooting for a market of anyone developing for the last 10 years of platforms in just Objective-C, Swift, and SwiftUI would be the move.
> Microsoft's Visual Studio (not VS Code) is honestly fabulous compared to Xcode. VS is of a similar vintage to Xcode, with just as many "cobbled together" features added over the years - from VB to C# and .NET, WinForms to WPF to whatever the latest thing is. But it's still fast and reliable. Well, it's fast and an order of magnitude more reliable than Xcode.
You're probably right, but it has only relatively recently had a Mac version; otherwise I'd be working with what I'd anecdotally say is an annoying-as-hell and unstable OS where all the legacy stuff is still highly visible. I've never pushed Visual Studio hard, but I do recall it being as decent as you say.
Ultimately, what I'd like to see is more diversity in the native Apple platform dev space, and for the Xcode team to confront the reliability and performance problems you mention and everyone experiences. I believe they introduced a new linker last year, for example, and I hope to see it improve dramatically. Maybe the right move for them is to build a much more nimble first-party sibling solution. I'm not even on an M1+ yet, still this shitty Intel thing, and I hear these issues daily when my jet engine spins up.
Yeah; I’m not claiming either IntelliJ or Visual Studio is a viable replacement for Xcode when you want to make an iPhone or Mac app. My point is that the engineering quality of both Visual Studio and IntelliJ is dramatically better than the engineering quality of Xcode.
Building a good, reliable, fast IDE with modern features is clearly a task other software teams have been able to succeed at. Xcode has no excuse for its bugs. It feels like the result of demo-driven development - where features are worked on just enough to demo them, either internally or at WWDC. But it takes more work to ship a good product than it does to make a snazzy demo. And, for some reason, that work just doesn’t seem to happen on the Xcode team. Not since Xcode version 3 or so.
I was going to mention the TypeScript as well. How does TypeScript in Electron get access to multiple mouse devices and also control multiple cursors?
I would have thought that would be near impossible without going native.
When all current operating systems make the fundamental assumption of only one application window having “focus” at a time (even down to individual application elements having exclusive focus), how does having multiple cursors do anything other than massively cock-block everyone in control of a cursor?
I can see this as being nothing more than a Battle Royale for control of whatever UI element you are trying to interact with, while fighting off the control that other users want to have of other UI elements that they are trying to interact with.
In other words, a massively shitty hairball where everyone gets frustrated to the point of rage-quitting.
Edit: just try using remote-access/remote-support software, like RustDesk, where each user (when more than one has access to a desktop) gets their own cursor (even if you can’t see the other person’s cursor). You end up walking on eggshells trying to avoid the other person’s application/UI focus.
This seems very useful, but can anyone illustrate what the use cases are? For my own purposes it would have limited value, since I can do 98% of what I need to do with a single mouse.
I don't think this hack makes macOS actually have two pointers; it just teleports the pointer to the screen associated with the input device you're using.
What happens when you put 2 cursors over the same app, and the app checks for cursor position (e.g. video games, on-hover UI)? Will it get one of 2 cursors? Or will it see nothing, and then when you click something, the cursor will "teleport" to the place where you clicked? Or does it use some magic multitouch macOS API?
Will there be 2 cursors at the same time? The way I read the GitHub page, I thought there would only ever be 1 cursor at a time. When you switch mice, I thought the cursor would teleport to the other monitor, to the place where it was last used on that other monitor.
I think it surely 'teleports': the cursor that triggers hovers is presumably the last one moved; the others must be dummy placeholders, as it were, not actually independent, simultaneously usable cursors.
I think the non-capitalized ‘s’ isn’t a typo. They mean studios using Macs, not people using Mac Studios.
FTA: “DynaMouse allows you to assign a specific display to a dedicated mouse device (including the built-in mac trackpads) so that when you have multiple screens in a studio-like/complex workstation setup (and far apart from each other), you don't have to drag your mouse over to the other screen.”
Yeah, as an owner of a Mac Studio with a work MacBook Pro on the side, I was confused as to why my Mac Studio specifically would need mouse drivers. Should've spotted that it was a lowercase S.
Personally, I would've probably tried to avoid confusion by wording it a bit differently (like "Mac-based studios" or "Mac-using studios") knowing that there is a product called "Mac Studio".
Maybe they didn't know there was a product called 'Mac Studio'? I didn't.
(It seems to be a cross between the Mac Mini & Pro? Pretty sure it used to be just the high-end Mini, not separately named. Or the low-end Pro; I seem to remember they suddenly made the Pro a lot more expensive a few years ago.)
I was playing with words (under the assumption that anyone interested would know what a Mac Studio is, and that this problem regularly comes up in a studio setting, where your displays can be far apart).
This is something frustrating: when a mouse is connected to a MacBook Pro, there are separate system preferences for the mouse and the trackpad. However, the natural-scrolling switch for each one follows the other, yet the tracking speed can be set separately. It is yet another sign of the clusterfuck that Apple's OS teams are.
This is what I use, and it does the job. It flakes out occasionally when I plug in (5% of the time, maybe?), but "ope, scroll wheel is backward, time to power-cycle the extension" isn't a crisis.
As others have mentioned, there's quite a few options, and I use mos (https://github.com/Caldis/Mos) for no particular reason. It does the same things as any of the alternatives, I just found it first and stuck with it since.
I think it's relatively straightforward? Each mouse should be generating input events, so it's "just" a matter of "Mouse1 += (100x, 200y)", "Mouse2 += (-3x, -5y)": keep track of a virtual cursor/pointer position per device, and have the "real" cursor jump there depending on which mouse is generating input events.
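That bookkeeping is only a few lines of TypeScript. A sketch (the device names and event wiring are hypothetical; a real app would feed this from per-device HID events and warp the single system cursor to the returned point):

```typescript
interface Point { x: number; y: number }

// One virtual cursor position per input device.
const virtualCursors = new Map<string, Point>();

// Apply a relative motion event from a device and return where the single
// real system cursor should now be warped to.
function onMouseDelta(device: string, dx: number, dy: number): Point {
  const p = virtualCursors.get(device) ?? { x: 0, y: 0 };
  const moved = { x: p.x + dx, y: p.y + dy };
  virtualCursors.set(device, moved);
  return moved; // the real cursor "teleports" here
}

// Mouse1 moves on one screen, Mouse2 on another; each keeps its own spot:
onMouseDelta("mouse1", 100, 200); // real cursor at (100, 200)
onMouseDelta("mouse2", -3, -5);   // real cursor jumps to (-3, -5)
onMouseDelta("mouse1", 10, 0);    // jumps back to (110, 200)
```

Clamping each virtual position to its assigned display's bounds is what would pin a device to "its" screen; that part is omitted here.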
...and for the use cases, having an extended desktop (eg: airplay to HDTV mounted on the wall) and being able to have your primary "desktop" mouse 100% glued to your main screen, but a secondary "click the play next video button on the tv" mouse is genius!
I'm pretty sure if somebody were sufficiently innovative, they could paint a bullseye/target around the virtual cursors with some sort of minor performance penalty (à la xNeko - https://github.com/crgimenes/neko).
In my case the /submit form threw me off; I haven't used it much, and the "text is optional if url is present" part didn't really register. I've probably been conditioned by modern UI, which really guides you along, i.e. I thought "text" was a required field.