It's funny. I have programmed for a decade and at no point have I ever found that an external monitor was the solution to my problems. My coworkers think I'm crazy, but I do all development on a 15' macbook. My path to efficiency and ergonomic work has always been to become more familiar with the keyboard shortcuts for navigating windows and tabs. Along with Spectacle, Vimium, and tree-style tabs, I have never wanted for a second monitor. Mostly I find it annoying because I forget where I put my applications if I have two monitors.
My overall mantra at work is that I can only do one thing at a time if I want to do it well. I feel that, in a sense, limiting myself to one viewport helps reinforce this behavior.
I've used 13" laptops (MacBook Air, Chromebook) for years. I even use my 11.9" iPad when traveling. I mostly use a terminal app ssh'd to a remote server, screen and vim, and switch to a browser window. I prefer to keep my focus on one task in one full-screen window. I find that works best to keep my attention on one thing. Multiple windows, to say nothing of multiple monitors, just create distractions. For me programming and system admin mostly happen in my head, not in a multitude of windows and panes on the screen.
Back in 2005-2008 I worked at a software company that gave developers two large monitors by default, and some people had three -- a status thing for the more senior people. I noticed many of the second displays had Facebook or Twitter on them all day. I don't use social media. I only used one monitor at that job and when I started freelancing I used a laptop and that's worked fine ever since.
I'm old and have worked with screens and keyboards for 40 years, starting with dumb terminals, and never experienced any posture or RSI problems. I know some people do, and they tell me how unhealthy my setup is (no laptop stand or external keyboard). "Ergonomics" is probably more of a personal thing than a hard science.
No, you're just the equivalent of someone who has smoked a pack a day their entire life but doesn't have cancer insisting that that lung cancer business is overblown and unscientific. Even a high risk isn't a guarantee.
I'm not sure what you're referring to. What risk do I face using a laptop?
If you mean my statement about ergonomics, look into it yourself, you will find mostly pseudo-science and unsupported claims, and an industry of experts and products that makes a lot of money. If it works for you, great.
External monitor does not necessarily mean multi monitor. I have an external monitor that is big, and has good contrast and refresh rate, but I close the laptop's lid.
Multiple monitors are ergonomically worse than multiple virtual workspaces IMHO - if you can set up keyboard shortcuts instead of mouse gestures.
I am 100% aboard the single screen, single fullscreen app, almost exclusively keyboard work with lots of x-tabbing.
I really like having a second screen though. I keep my laptop open on the side and use it to display "read only" stuff like a build status, consoles, etc.
A second screen is a nice asset when working from home with lots of video calls and screen sharing. I like having a screen where I can do things in the background while following what is presented in the call. When I present, I like choosing a separate screen where I'll move windows in and out depending on what I want to show. People in my call don't have to see my other windows/notifications/mails/whatever when I switch from one app window to another. And it is way easier to have a dedicated screen than switching (or forgetting to switch) the shared window.
For screen sharing it gives a certain peace of mind to have the call in my peripheral vision. But turning your head is still worse than switching workspaces.
For a while I used a single HD 32-inch monitor and loved it. I avoided complex window arrangements all over the place, but could lean back in my chair and see everything from a distance.
This is a bit of an odd take. Do you think that because you can manage one screen well that you don't need a second screen? Or do you think other people only need a second screen because they can't manage a single one?
More screen real-estate is just that. Effective management of two monitors might be a little extra work but it's worth it.
> My path to efficiency and ergonomic work has always been to become more familiar with the keyboard shortcuts for navigating windows and tabs.
> Mostly I find it annoying because I forget where I put my applications if I have two monitors.
Do you think you could have put this effort into coming up with a workflow that took advantage of multiple monitors? I didn't use multiple monitors for years, but eventually I tried it out and now it would be hard for me to go back, so I've always considered it to have a small learning curve to figure out how to "make use" of the extra space properly. I feel like being willing to use more keyboard shortcuts for windows and tabs would be an excellent way to get more efficient use out of extra screen space.
That's so interesting. I'm the opposite, but I've worked with plenty of people who work like you do. Both styles are pretty common.
> My overall mantra at work is that I can only do one thing at a time if I want to do it well
I'm curious. What about tasks that are "one thing" but may involve >1 windows/apps? For example, referring to documentation while looking at code? Or tweaking CSS/markup while observing the resulting changes in the browser? (If you do that sort of work)
Yeah, I've had 3 monitors forever, and for a while I tried 4 [1].
I do webinars where, obviously, only one monitor is displayed. I find it clumsy: switching between the code, the program, logs, browser, docs and so on.
In general work I focus on one task, but I find that many programs are involved at one time. Docs on one, code on another, program on a third and so on.
I also have a need for email to be open, along with Skype etc. Those get hidden often though, hence my need for the 4th.
[1] my experiment with 4 failed because the horizontal spread was too far, and it was tiresome to swivel to the 4th. I considered putting the 4th above the other 3, but felt that too might be "out of eyeline". So for now I'm maxed at 3.
Don’t you find looking down at a laptop painful after a while? At a desk I’m okay working on a smaller screen but if I don’t have an external monitor I need to raise it on a few books and use a separate keyboard if I’m going to spend 8 hours a day sitting like that.
I would consider it crazy to not have an external monitor. I always use one.
But that's not because I think it's important to have two monitors. I only use the external monitor; the laptop's built-in screen would only see any use while traveling.
I'm pretty sure that those people use professional laptop stands, which elevate the laptop monitor to an ergonomic height, in conjunction with an external keyboard.
But 'ergonomic work' has a lot to do with posture as dictated by screen positioning, not just size. With OP's setup, the 180 degree ThinkPad hinge and external keyboard allow him to sit with relatively good form. I used to have a hotdesking setup that was similar [0].
How do you do that with a MacBook, where the screen only bends so far? Do you also use an external keyboard?
Me too, I just use my macbook. I keep 2 buffers side by side in VSCode, and cmd-tab or cmd-` to switch to other apps/windows.
I genuinely suffer from imposter syndrome when I see my colleagues with several huge screens in all sorts of configurations. I guess I'm not a real hacker.
I can only look at one screen at a time anyway, so it's as fast to switch apps as to turn my head to look at a different screen. Plus, if I want to move the cursor, I need to switch apps anyway (or do a long trip with the mouse and click somewhere for focus, which is even worse).
If anything, not spending the time to move your neck but rather manipulate keys to manifest buffers in front of your eyes is the real hallmark of being a hacker.
But only when moving your eyes is slower than switching between your buffers. On the Mac especially, from what people with just one screen say, there are nice fade animations that cost at least 500ms.
There's a great macOS app (AltTab) that replicates Windows' alt+tab window switching. It's fantastic; I highly recommend it if you're as big a keyboard window switcher as you sound.
I just found Contexts and it's fantastic: it's just like alt-tab but also comes with a Spotlight-esque quick switcher and supports vim j/k by default. It's also really useful for switching between desktops after turning "reduce motion" on.
I started using Apptivate a couple of months ago, someone on here recommended it. It makes app switching very pleasingly fast. I use a PC kbd with a mac mini :-) and have F5-F9 assigned to instantly switch to the 5 apps I always have open.
Also, holding down F5-F9 shows those windows as long as they're held down. So now, to take a peek at the bash window I hold down F5 as long as I need to look. Can also type or drag things onto there while holding F5 with the other hand.
You can assign any app to any key or combination of keys. It's free. It can do other stuff too. Very highly recommended.
Also, F1-F3 I have assigned in the system settings to F1=Show desktop, F2=current app windows, F3=show all open windows (mission control). So I very rarely have to use Cmd-Tab any more!
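The hold-to-peek behavior described above can be sketched as a tiny state machine. This is a minimal illustration, not Apptivate's actual implementation: the window manager is abstracted into two hypothetical callables (`front` returns the frontmost app, `focus` raises one), and a real tool would also need a hold-time threshold to distinguish a quick tap (permanent switch) from a hold (peek).

```python
class PeekSwitcher:
    """Hold a key to peek at a window, release to jump back."""

    def __init__(self, front, focus):
        self._front = front      # () -> currently frontmost app
        self._focus = focus      # (app) -> bring app to the front
        self._peek_from = None   # app to return to on key release

    def key_down(self, app):
        """F-key pressed: remember where we were, bring `app` forward."""
        self._peek_from = self._front()
        self._focus(app)

    def key_up(self):
        """F-key released: jump back to the app we were peeking from."""
        if self._peek_from is not None:
            self._focus(self._peek_from)
            self._peek_from = None
```

Wiring this to real global hotkeys is the OS-specific part that tools like Apptivate handle.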
I get that you work on one thing at a time and that you use programs with keyboard shortcuts like Spectacle and Vimium etc. But still, I can't help but wonder how your IDE looks. I'm guessing you have a code editor (generally with left/right/bottom panes holding a file explorer, REPL, terminal, other tools, etc.); add a browser with a documentation page open and your real estate is too little, which means a lot of switching/scrolling. That's very easy thanks to all the keyboard shortcuts, but it's switching/scrolling nonetheless. Maybe you detach all these into their own windows and use different workspaces in your tiling manager, but that's again a lot of switching just to use a REPL.
I am happy that your setup works for you, but a bigger monitor and/or maybe even one extra monitor will just help reduce a lot of that switching/scrolling.
My IDE has basically just the code editor, yes. I don't need the project explorer: zero useful information at most times, so why open it? It's not often I need to know where something is. I open files by name via a shortcut. Yes, our naming is that good.
I don't need a browser with documentation open at all times. I only need to look at it IFF I need to look something up, and if so I switch to it and it's full screen. Copy what I need. Done. No need to flip my head over to the place on some huge screen where "documentation lives". Also, I almost never need docs as I have autocomplete in the IDE.
When I debug half the screen has the debugger open but only when I debug.
I also know how to get to the place I need with shortcuts at all times. When I see people use that weird "show all windows small at the same time" feature, I die a little inside. Of course it's going to take them ages to find the right one. I alt/cmd double-tab to the right window faster than the animation to show the windows would be done. I know that my documentation is two alt-tabs away and that with one alt/cmd-tab I'm back in my IDE. I know the terminal is one alt/cmd-tab away.

I don't understand people that use small terminal windows integrated into an IDE. Small and unusable. I have a terminal window alt/cmd-tab-able to at all times, and I know which tab inside that window is for what kind of work, e.g. logs tailed on tab 2. When I have it open I can 100% focus on it. No distractions.
As a human you can only focus your eyes on one single thing at a time.
Whether you have to move your head/eyes or press a key to switch between the documentation and the IDE does not make much difference in terms of efficiency.
I actually find it more comfortable, ergonomic and fast to have a single screen and switch the workspace. That also means that my (physical) setup is minimal and easy to manage, and that I do not get unnecessary distractions or things moving (notifications, messages, ads, animations...) in my peripheral vision.
It's on the macbook flair, which is a foldable cell phone with a keyboard. Developers have learned about the power and low cost for this machine and have started to use it as a laptop.
The multiple folds of the screen and keyboard are what allow the flair's 15 foot footprint to be carried around in a phone's form factor.
I have one of the original 16 core versions of these. Come to find out, Apple has recently released 80 and 128 core versions of the flair, so I may need to upgrade next year for the additional horsepower to run data models.
I use shortcuts extensively too. And sometimes work exclusively on my laptop.
That said, it's helpful to be able to use my project while seeing all the logs involved. And that gets extremely claustrophobic on a laptop. Especially if you have an application that is heavy on both the client and server side - chrome, chrome devtools, re-frame-10x sidebar debugger, server side logs/debugger.
It gets even worse when I'm working on one project, which is a client/server game using Unity.
> Mostly I find it annoying because I forget where I put my applications if I have two monitors.
Your applications should always be in the same place. You can still use keyboard shortcuts too.
Beyond that, because when I first got a 30" monitor it was so much real estate and it took so long to move my mouse between windows, I wrote a little program that let me set and restore mouse positions with global hotkeys. It will also bring whichever window is under the mouse position to the front.
So I can see all logs at once as I'm interacting with the project. Including the debugger. Everything always goes in the same spot. And if I want to interact with one of those windows, I can use a hotkey which will instantly set my mouse cursor to a known good position within that window, which enables me to interact with even a GUI in a reproducible fashion rather than having to slow boat my cursor over to it.
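The save/restore bookkeeping such a tool needs is tiny; the real work is the OS layer (on Windows, roughly GetCursorPos/SetCursorPos, WindowFromPoint and SetForegroundWindow, triggered via RegisterHotKey). This is not the commenter's actual program, just a sketch of the logic with those OS-specific calls abstracted into injected callables:

```python
class MousePositionHotkeys:
    """Save and restore mouse positions keyed by hotkey slot.

    The OS layer is injected:
      get_cursor()          -> (x, y) current cursor position
      set_cursor(x, y)      -> warp the cursor there
      raise_window_at(x, y) -> bring the window under (x, y) to the front
    """

    def __init__(self, get_cursor, set_cursor, raise_window_at):
        self._get = get_cursor
        self._set = set_cursor
        self._raise = raise_window_at
        self._slots = {}  # slot name -> saved (x, y)

    def save(self, slot):
        """Bind the current cursor position to a hotkey slot."""
        self._slots[slot] = self._get()

    def restore(self, slot):
        """Warp the cursor to a saved position and raise that window."""
        if slot not in self._slots:
            return False
        x, y = self._slots[slot]
        self._set(x, y)
        self._raise(x, y)
        return True
```

With each hotkey bound to `restore` (and some modifier combination bound to `save`), the cursor jumps to a known-good spot inside a window instead of being dragged across a 30" display.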
> Beyond that, because when I first got a 30" monitor it was so much real estate and it took so long to move my mouse between windows, I wrote a little program that let me set and restore mouse positions with global hotkeys. It will also bring whichever window is under the mouse position to the front.
On Windows, the focus-follows-mouse feature does both of these things. Along with raising and focusing whatever you mouse over, it also moves the mouse to any window you alt-tab to. Sadly, it's been tuned weirdly in newer versions of Windows, so it isn't very useful now.
That's cool, I didn't know such a thing existed - when I ran into the problem I was at a loss on what to search for to make the large monitor more usable.
Too bad it's weird now.
I don't use Windows anymore, but when I first did this it was for Windows, and it was only something like 100 lines of C to accomplish, including all the standard Windows boilerplate.
I don’t mind coding on a 12 or so inch laptop screen. But editing a LaTeX file is kind of a pain, because it is nice to have the code and the document side by side, but half a screen is quite small. I guess people working on websites must have similar issues?
Personally I don't mind seeing only the LaTeX code and looking at the PDF once in a while after compiling. But a second screen to keep background papers or notes open and accessible would be very convenient.
I’ve been looking for an eink screen for exactly this kind of thing.
Unfortunately they only seem to get put into expensive tablets, or monitors that sort of want to be a main-monitor replacement (also expensive).
There's an eInk panel that seems sort of niche-popular: 7.5 inch with a 5s refresh, reasonably priced (it's used in some Waveshare projects), but it seems that you can only get the bare board, unfortunately.
I mostly agree. I enjoy my 40” 4k monitors, but I’ve found that sometimes constraining myself to a laptop display can improve my focus. However, laptop keyboards, even those on macs and thinkpads, are poorly suited for extended use I think. The author is using an external keyboard, and that makes the setup much more usable in my opinion.
I'm using 4 monitors, two 33" and two vertical 22".
It has nothing to do with knowing how to do shortcuts, but with having to do them at all. There are plenty of instances where you may only need to provide input to one window at a time but want to see other things, like documentation, output, logs, or a browser window showing the page you are editing. Having multiple monitors greatly reduces the need to switch between windows to reference something or to see the results of my inputs.
I personally find that if I'm forced to alt-tab between my IDE and documentation, like when I'm working on a laptop, it's incredibly distracting, breaks my flow, and slows down my work significantly.
I only use one monitor, with virtual desktops and side-by-side windows for context switching. But it's the external monitor if one is available: it is more ergonomic to work on a big screen, at the optimal distance and height from the eyes, with the neck and spine properly aligned rather than curving down.
I use only 50% of my MacBook screen for actually writing code, and in fact I bet I could use less than that. I could SSH from my phone to a dev server and just work off a Bluetooth keyboard and the Terminus app. The advantages of being a powerful vimmer.
Interestingly enough with a Macbook I always only used one external monitor instead of 2 as I am used to. IMO the controls were kinda weird (for me) so it felt less painful just sticking to one.
For me, it's not so much the screen size as the ergonomics. Having the screen at the correct height enables good posture. A separate mouse and keyboard for comfort.
But you can't type when the laptop is on a laptop stand.
Then you need an external keyboard, mouse and mousepad. At that point you're not far off from just slapping a monitor on the table instead of a laptop stand.
You should look up portable monitors. They are crazy small, cheap and good nowadays.
You can get a 15-16" OLED external display with a battery for around 300 western monetary units. Half that if you stick to IPS panels[0]. You can shove one of those in a backpack easily.
How do you deal with 4-up situations, where you need the comp or design doc, the code, the test window, and the debugger/dev tools all available to glance back and forth between?
Chicken and egg, I think. I am a firm believer that people don't have much better or worse mental health now over-all. But I do think that in the past mental health was something that was more intuitive and that people got more support for from their immediate social circle. Now that society is further isolated, people both have more time to ruminate about the specifics of their mental health, and the solution to having bad mental health is more frequently to talk to a therapist.
Even this would not by itself cause the increased medicalization of totally normal human problems. However, therapists need to bill insurance, and it is not really possible to bill insurance for general malaise. Additionally, more people are expected to be rational economic agents more of the time, and rational economic agents don't have any emotions. So when people fall outside the narrow bounds of what is considered acceptable, and exhibit behavior outside of the narrowing band that allows for gainful employment, more of them end up categorized as having specific ailments.
People that /are/ able to crush themselves into the little boxes that are required of them to not end up on the streets have their own mental health problems as spill-over.
TL;DR: society optimized for money causes people to feel crazy.
I might sound a little bit 'tinfoil hat' here, but I believe that what follows is not hyperbole. AI is already the 'Architect' more than most of us would like to admit. Even if it is not sentient, the various AIs that we use during the day were designed with a purpose and they are goal oriented. It is worth reading Daniel Dennett's thoughts about the intentional stance: we know that a toaster is not sentient, but it was designed with a purpose and we know when it is or is not achieving that purpose. That is why we might sometimes jokingly say that the toaster is 'angry with us' or that when the toaster dings it is happy. It is actually easier for us humans to interact with objects when we know that they have a purpose, because that is similar to interacting with other humans, who we know to have purposes.
Coming back around to AI, ChatGPT was designed with a purpose, and people project intent onto it. People act like it is an agent. And that is all that matters. The same is true of the TikTok AI, the AI that calculates your credit score, the traffic lights by your house. Hell, it's also true of your stomach.
The point is that objects in our environment do not have to be literally conscious for us to treat them as conscious beings and for them to fundamentally shape the way that we live and that we interact with our environment. This is pretty much the basic tenet of cybernetics. To believe that all of these tools do not have intention and that they are 'just tools' used by some people to influence other people is not wrong, but I don't think that it captures the richness of the story.
Differentiating where humanity/consciousness begins and where the technology ends is already more complicated than most people think. Traffic lights train us just as much as we make traffic lights. I fully believe that people will be saying "this isn't true AI, it doesn't /really/ have feelings" long after the technology that we create is so deeply embedded into our sensory I/O that the argument will be moot.
That's part of why there's objection to claiming sentience, it distracts from the discussion of impact by dragging a whole lot of extra philosophical baggage into the conversation when it's not yet necessary.
That's kind of exactly my point when I say they are not architects, just tools: I agree 100% that people project intent onto these things, and I believe that's exactly what our "ex-Google employee" is doing here. And it's dangerous.
It's dangerous in part exactly because it shifts the responsibility for the acts of the tool to the tool itself, and away from its author. As if deforestation were the machine's fault, not the fault of the humans driving it.
I can never agree with your assertion that "AI is already the Architect". It is not; the AI does not design anything. It does not plan anything. It has no ideas, no critical thinking, no judgement of value or morals. The AI just does what it's told, like a tool, a worker ant, or any other algorithm. It's complicated enough that it's not obvious to us what it was told to do, but ultimately that's all it can do.
I understand your point, and I'll agree to disagree. I think it just has to do with what we value. Even though it is a tool, tools change what options we have. If you have a shovel, 'digging holes' is now an optional activity for you to pursue that wouldn't have been otherwise. Is the shovel the 'architect' of your ability to dig holes? Maybe no, but the tool-human interaction is a back and forth. Tools generate affordances and humans choose to act on affordances.
Maybe put it this way, if there was an AI that could plan out your day in a way that would optimize some metric of happiness that you agree with, you might start to use the AI. Is the AI the architect of your day because it plans it out and tells you what to do, or are you still in charge because you could choose to stop using the tool, even though it would not be in your best interest?
I think this is the point that we are reaching with AI: it is a tool that is so flexible that it doesn't just offer single affordances, but begins to be used as a guiding function for what decisions to take. At that point I think it /is/ an architect of some kind.
Again though, this is mostly just quibbling about definitions and terms.
This is a possibility that shouldn't be dismissed. Trees use mycorrhizal networks to communicate and have been around for a very long time on this planet. They model the environment and use either a set of micro-decisions or a set of larger, slower moves that are made across longer timescales than humanity is used to. You can argue whether they possess sentience or not, but when discussing models, decisions, and consequences - trees seem to act with plenty of coordination and understanding and self-interest.
I have no evidence for this, but my feeling has always been that the highest-volume exploits were just a bot running yesterday's zero-day against every IP listening on a port. You can't get that kind of volume by calling people and asking for their password.
If you leave an unsecured mail server accessible to the internet, it'll start sending spam emails within 30 minutes.
On the other hand, phishing emails are also automated, and that's essentially asking for the password.
It's probably safe to say that phishing is the most common method among APTs like state intelligence agencies. It's cheap, it's easy, it works. No reason to burn zero-days unless simpler methods with less exposure don't work, and they usually do.
But we can broadly categorize security incidents into two bins. First are opportunistic attacks, which broadly attempt a method that sometimes works; two common examples are minimally-targeted phishing emails (think Best Buy invoice) and automated scanning for old versions of WordPress with known vulnerabilities. Second are targeted attacks, where the attacker chooses a target and then attempts different methods until one succeeds. Overall, targeted attacks are far less common than opportunistic ones, but because they involve a higher level of effort they're only attempted when there's a high level of motivation. Targeted attacks tend to result in greater financial losses than opportunistic attacks: compromising machines to add them to a botnet usually isn't worth the effort of a targeted attack, but getting banking credentials or crypto wallets usually is.
All of information security is fairly bimodal in this way. It often seems like even technical professionals like software engineers struggle to understand basic security practices, but I think this is one of the biggest causes: most people tend to think about one case and ignore the other. Unfortunately one of the things that makes security very difficult is that both cases are real and the two require fairly different practices to deter, prevent, and detect.
Social methods are far more common with targeted attacks because "true" social engineering involves a higher level of effort, like time on the phone. That said, phishing falls into an in-between where some consider it to be a social method but it is amenable to widespread automation. There's also a wide spectrum of effort in phishing. Many are tempted to try to categorize phishing activity into a binary of "phishing" and "spear-phishing" (I hate these terms), but that doesn't really reflect reality very well. In a large corporation you can usually find examples of phishing that are targeted to varying degrees of specificity: at anyone, at corporate employees broadly, at people in the industry, at employees of a company, a department in that company, and even carefully tailored to a specific employee. The frequency of course tails off as you get more specific, but then it's not that unusual for some organized crime group to run a sustained campaign of fairly closely-targeted phishing as happened recently with Twilio.
Opportunistic attacks are certainly greater in volume to the extent that some call them "internet background noise," but most think that targeted attacks probably produce greater total financial damage. Security is very faddish though, not only on the defense side but also on the offense side, so it probably varies from year to year. For example, the emergence of ransomware was a major trend that required a strategic shift in defense in many organizations since ransomware attacks were fairly low effort but also very high damage in many cases.
Mostly startups use GSuite, big traditional companies and banks still run their own. But that wasn't really my point. My point is that there are a lot of bots looking for low-hanging fruit.
In 2011 I spent hours writing a script to brute force a wifi password at a hotel because I didn't want to pay $5 a day for wifi. It worked. I was pleased with myself.
When I checked out they gave me a receipt and I went to throw it away and saw a handful of wifi passwords in the trash bin.