Starting today, every time I feel like ignoring accessibility in my applications because "no blind person is likely to use them", I'll remember this blog and punch myself in the face.
Holy shit, I admire that guy so much. Being able to program whilst lacking sight astounds me. I wonder how he got into it.
Becoming blind is one of my biggest fears, and I consider programming one of my favourite activities on the planet, so I'm happy that if the worst were to ever happen to me, I wouldn't be completely screwed. However, I gotta wonder how well he's able to hold all his code in his head just off hearing it. Whenever I program I often go back and read and re-read parts I've already written; I imagine having to hear it rather than just glancing over it would slow the whole process down a lot. I know he mentioned that he's gotten very good at mentally conceptualising his code, which no doubt takes a lot of training, but damn, a really large codebase would throw me for a tizz.
I'm a programmer by profession, but at one stage there were no such jobs available, so I went back to teaching. Teaching office computer skills (typing, Microsoft Office, drawing apps, ...) to kids who were "*-challenged", i.e. blind, deaf, mentally impaired and disturbed, and others.
The most exhausting, and rewarding, job I've ever had.
Vincent (one of the blind guys) ran into trouble with Word. Can't remember what the problem was, but I solved it with a simple Word macro. His flabber was gasted - not only did he then grok the fact that all the programs had code behind them, but the code was all plain text, hence was often easier to comprehend in a screen reader than what he heard from most programs. Even text in Word can be a pain to hear when every font change is also announced.
Of course the text might be easier to hear, but the logic behind it can also be tougher to grasp. But Vincent loved it and wanted to learn, and I much preferred teaching VB to teaching Word, so we soon had a programming class going within the office skills class. Some found it interesting, but Vincent found it easy and got a City & Guilds 425 Application Programming cert from the course. He went on to get a job programming before I did.
So I'd say he got into it the same way as me - one day I sat down in front of a terminal and wrote some code, which eventually worked, and I was hooked!
It was just easier for me because the terminal was more accessible and no-one thought it was "obvious" that I wouldn't be able to use it.
I imagine to some extent, due to the speed of his reader, he can audibly browse his code. This is just a guess... My mind is actually getting all sorts of accessibility ideas for coders. You could have hotkeys to call out the function you're in, or call out all the methods in the current file (structure/class view, etc.). Jump to files. I have no idea what the current state of the art is, but I bet it could be streamlined a ton.
Lisps might make an interesting language of choice due to the simplicity of their syntax and the ease of navigation through forms. Hmmmm.
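As a rough sketch of what "call out the structure" could look like, here are a few lines of Python using the standard ast module to list a file's classes and functions with their line numbers; a screen reader hotkey could read this kind of outline aloud. (The source string is just a made-up example.)

    import ast

    # Hypothetical file contents, inlined so the sketch is self-contained.
    source = """
    class Stack:
        def push(self, item): ...
        def pop(self): ...

    def main(): ...
    """

    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            kind = "class" if isinstance(node, ast.ClassDef) else "function"
            print(f"line {node.lineno}: {kind} {node.name}")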
Audibly browsing through lines of code is something I do all the time, yep :) Also, checking it using braille helps; it's a bit more ...hmm ...direct in some ways. As for calling out functions and such, a lot of IDEs can already do that. Have a look at the outline view in Eclipse, as well as the annotations you can skip between using hotkeys, which is useful for finding errors in your code.
There was another post about this months ago, and the blind poster said that while C is usable (thanks to brackets), Python is completely unusable or unreadable to them.
There is absolutely nothing preventing blind coders from using Python. Indentation can easily be reported by any screenreader I am aware of and it is a very clean, screenreader-friendly language for the rest. I really enjoy working with that language
While it makes sense that it would be unusable without proper treatment of the indentations, I don't think it would be hard to adapt the screen reader. Note that the Python parser itself converts indentations to begin / end tokens, which are exactly like braces, so those can be read out loud to make the indentations explicit.
This article had me thinking — what if the different levels of scope were represented by reading the text at different pitches? Kind of like rainbow parentheses with lisp for sighted people.
It'd be really interesting to try out a technique like that with a lisp, I think.
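A toy sketch of that idea, assuming a speech synthesizer that can take a pitch per utterance (the 20 Hz-per-level step and the baseline are arbitrary choices for illustration):

    def tokens_with_depth(code):
        # Walk a Lisp-style expression and tag each token with its
        # paren-nesting depth.
        depth = 0
        for tok in code.replace("(", " ( ").replace(")", " ) ").split():
            if tok == "(":
                depth += 1
            elif tok == ")":
                depth -= 1
            else:
                yield tok, depth

    BASE_PITCH = 100  # Hz; hypothetical baseline for the synthesizer
    for word, depth in tokens_with_depth("(define (square x) (* x x))"):
        print(f"say {word!r} at {BASE_PITCH + 20 * depth} Hz")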
I've personally helped TV Raman (http://en.wikipedia.org/wiki/T._V._Raman) re-setup his screenreader and Emacs stuff multiple times when his local PC required re-installation. He hacks on emacs code pretty effectively when he's not doing other things. IIRC different emacs fonts/faces are rendered via the reader as slightly different tones to the voice.
(I also helped introduce him to stumpwm when he absolutely had to have a GUI to do some specific testing; he was shocked and pleased that such a thing existed)
TV Raman was in a numerical optimization class with me in grad school. He kept up with blackboard lectures, which were densely mathematical, along with everyone else, just by listening to what the professor was saying. He was one of the outstanding students in the class. An amazing person.
I actually had to go through that line by line to even see the difference since, as I said, I tend to ignore braces, brackets and parens unless I need them. For this, working with audio only would just be a bad idea, and a braille display would make this a lot easier.
It should basically read the token stream instead of the text. For Python that includes indent and dedent tokens, which are equivalent to open and close brackets.
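To make that concrete, Python's own tokenize module exposes exactly those tokens; a sketch of a reader that voices them as "begin block" / "end block" might look like this (the snippet being tokenized is invented):

    import io
    import tokenize

    source = (
        "def greet(name):\n"
        "    if name:\n"
        "        print('hello', name)\n"
    )

    # INDENT/DEDENT tokens mark block boundaries, just like braces would.
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type == tokenize.INDENT:
            print("begin block")
        elif tok.type == tokenize.DEDENT:
            print("end block")
        elif tok.type == tokenize.NAME:
            print(tok.string)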
A large codebase is a pain if it's not organized properly ...I like MVC a lot for that reason :) If people adhere to the conventions, it's easy to find what's what.
Thank you so much for writing this amazingly inspirational article. You have so much to offer to sighted coders. Reading about your experience almost makes me wonder how much better all software would be if, by some fluke of history, we had never invented visual representations of code and were all required to write code just like you do. It seems to me that all the important abstractions of software engineering take on a much higher priority when you can't rely on your sight to make sense of the mess. "Less is more" indeed!
Please keep sharing your experience with the community, I think your input can bring about more benefits to sighted developers than you may have realized.
A blind friend of mine is a programmer, and honestly can hold the structure of his code in his head far better than I can. The same is true of just about anything else - he's quite capable of holding the state of an entire game of Scrabble in his head while still playing better than I can.
This brings up a really interesting point - how do blind people visualise code, conceptually? When people talk about stacks, heaps, lists etc, there's generally a visual representation that goes along with it. I wonder what kind of abstractions blind people use.
Being blind doesn't mean you don't conceptualize things visually, or at least spatially.
When I was younger and being taught how to get around a new area, I had a hell of a time trying to get my instructors to draw me a map. Not sure if I wasn't explaining myself well or if those instructors just decided to play stupid, but I had to fight to get even a simple drawn tactile map, and once I had one everything more or less clicked into place. Now that we have accessible touchscreens on just about every modern mainstream OS, I'd love a shared whiteboard app that could accessibly render UML diagrams or whatever else drawn on one tablet to a roomful of connected phones, laptops and tablets. You couldn't necessarily convey shapes and such exactly, but if you could position a shape meaningfully and add some sort of access hint metadata (e.g. "downward-pointing arrow"), I could spatially explore a UI or system diagram and everyone else could have their pretty pictures.
Are there any blind compiler/interpreter hackers? A programming language optimized for the blind would make an interesting esolang [0]. I guess a REPL works great. Does Forth or Lisp read better to the blind? Can you do syntax "coloring" in sound? Are static types helpful, or is type inference preferred? Would they like an editor like ed?
Check out Emacspeak[0]. I'm not a big fan of Raman's "ignore decades of accessibility work and run Emacs apps for everything!" approach, but based on my recollections of 15 years or so ago, it was one of the best audio coding environments I've worked with. It did a sort of audio syntax highlighting with different voices for different tokens, and more or less nailed auditory bracket/paren matching.
I can't for the life of me code in any Lisp-like language. Too much nesting, and whereas speech synthesis inserts pauses at commas and other punctuation marks, Lisp's lack of them makes it hard to parse a heavily-nested function call by ear. This is true to a lesser extent with Haskell and its emphasis on ., $ and other operators that change how a function call is structured.
As always, opinions expressed are my own, and shouldn't be taken as a statement of how blind people collectively do X. I'm one of many, so please don't walk away from this thinking Lisps are hard for blind folks. They're tough for me, and above are my particular reasons why.
To be very honest with you, I never really think about the stack and the heap. I know the concepts, but I tend to zoom into my code far closer. I work with the current function only, its inputs and outputs. Someone on Twitter recommended TDD to me, which indeed is only a small logical step from what I am already mentally conceptualizing
>Fortunately, some fellow campers at the Free Code Camp were sympathetic towards my plight and volunteered to transcribe all these slides for me. This offer left me 'flabbergasted', as our dear western neighbors across the sea would say.
Many eons ago, I volunteered for an organization called "Recording for the Blind & Dyslexic" (now known as Learning Ally - http://www.learningally.org/). Groups of individuals (mostly retired professors and other students) would record textbooks for college students. It was all volunteer- and donation-based. It would typically take days to weeks from starting a book until the recording was ready.
I loved almost all of it. The one thing I didn't like was that we would read a book in shifts and you wouldn't always be working on the same book from shift to shift, so you might read scene three of a play one shift, then the next day read chapter seven of a calculus 2 textbook. Regardless, it was always interesting and we always knew that there were students benefiting from our effort. As an extremely nearsighted child, one of my fears growing up was that I would grow to be so nearsighted I would be functionally blind, so it was a little personal for me.
Since then, I've been in charge of 508 conformance on many different websights [1]. I have always appreciated the sensory-challenged sharing how they are, or are not, able to use websites. I never cease to be amazed at the human ability to adapt and overcome such challenges!
[1] Freudian slip that I noticed but decided was worth sharing ;-)
Thank you so much for the link to Learning Ally. That is going to solve so many problems for my wife. We've been searching for audio versions of some of the textbooks they have listed there for months.
I'd like to know: what is the most comfortable posture for coding once you no longer have to look at a screen? I've wondered whether a syntax-sparse language like iolanguage might allow you to code entirely by voice and ear, with no need for keyboard. For those of us with vision, imagine having a lounge with a large screen on the wall. You can talk into your headset as you pace around, or lie on the couch.
Technically I can already sort of do that in any language. Do keep in mind that all I need to interact with a computer is a keyboard and some way to hear audio ...so I can in fact lay back on the couch with a keyboard on my lap and code away. In fact ... I do that all the time ;) I can't check my work using braille that way though, which is kind of essential with very complex code
>Premier tools that coders use every day, like the IntelliJ editor, as well as all its offshoots (PHPStorm, WebStorm, PyCharm), are completely inaccessible, due simply to the fact that the developers of these programs have not adhered to the accessibility guidelines. They've failed to give screen readers textual labels or accessibility descriptions to work with. The same goes for applications like SourceTree, which is slowly getting better, but is still a pain to use.
Have you tried Vim, Emacs, Sublime or Brackets; if so, how would you rate them?
What is your experience with HN's interface? I know I frequently wish there were more visual indicators of new/unread responses, but I can at least scan the beginning of messages and skip ones that I've previously read; do you have a corresponding way to skip responses?
No, I have to read through all the comments over and over and quickly arrow along when I notice it's a comment I've already seen. As for your editor questions:
Brackets is sort of usable, but too much of a pain to actually be useful.
Vim ...I really have no idea how to use it, and I think that's not a 'being blind' thing :P I should look into it more.
Sublime is, like IntelliJ, completely inaccessible.
Emacs works well, provided you can get your braille display hooked up to Linux, which is a little tricky for me in my current configuration. Also, there's Emacspeak, which I am having a hard time getting to run because I don't speak Lisp :)
Do you have trouble with voting up either comments or stories on HN?
If you check out the HN page source, you'll notice the HTML anchors for voting are essentially empty, save for an empty HTML DIV element with its background image set to the arrow icon. Needless to say, the way it's designed is dead wrong and highly problematic for systems like terminal text browsers.
Oddly enough, no, I can read the vote up buttons just fine and should in fact use them more :) Maybe there's some JS behind the scenes that is taking care of it.
I'm not the original author, but a blind coder nonetheless, so I think it's worth responding anyway. Sublime is an accessibility disaster, unfortunately. Emacs can be used, but it's much more productive with Emacspeak or Speechd-el. Vim would likely need something similar, because the advanced cursor movement commands have no screen reader feedback by default.
The HN UI is quite okay, but some semantic comment nesting (like Disqus comments, perhaps?) would help, as would some filtering for new comments since the last visit.
Thanks for writing this article. I've often wondered if I could still program if I lost my eyesight; this is reassuring.
When I consider expressions with infix notation, I visually scan them from left to right and right to left. Visually matching parentheses is also important.
I get the impression that if one were to listen to an expression in one pass, reverse Polish notation would be the most intelligible. What's your experience here? Does it make a difference?
Infix notation is when the operator goes in the middle. For instance, the expression x * (y-z) uses infix notation. When I look at it, I can see at a glance that x is multiplied by the difference between y and z. However, were I to listen to it, I wouldn't know when to expect the parenthesis to close. I would have to remember that I am within a parenthesis.
Postfix notation, or reverse Polish notation, places operators after the operands, so that same expression would read "x y z - *". The idea is that you push x on a stack, then y, then z. Then comes the minus sign: you pop the top two elements, y and z, and push their difference on the stack. Finally you pop the two remaining elements, x and y-z, and replace them with their product.
In reverse Polish notation you still need to remember something, namely the state of the stack, but there are no parentheses. I get the impression it would be easier to parse when listening to the expression, but I may be wrong.
So which parses most easily for you? x * (y-z), or x y z - * ?
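For anyone who wants to play with the difference, here's a minimal RPN evaluator in Python that makes the stack discipline explicit (the variable bindings are made up for the example):

    def eval_rpn(expr, env):
        ops = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
               "*": lambda a, b: a * b, "/": lambda a, b: a / b}
        stack = []
        for tok in expr.split():
            if tok in ops:
                b, a = stack.pop(), stack.pop()  # right operand is popped first
                stack.append(ops[tok](a, b))
            else:
                stack.append(env[tok] if tok in env else float(tok))
        return stack.pop()

    # "x y z - *" is x * (y - z); with x=2, y=7, z=3 this prints 8.
    print(eval_rpn("x y z - *", {"x": 2, "y": 7, "z": 3}))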
Thanks for writing this post. I'm visually impaired (but not blind) and can get by making minor adjustments to my work environment (font size, contrast, etc), but reading this was truly inspiring. Thank you.
I tend to use 8-dot US braille, simply because I am used to it. Working with other tables would work just as well though, if that's what you are used to; there are no dot 7 and dot 8 features that set it apart.
About coding by voice, there was an article some time ago about a programmer who developed a bad case of RSI and came up with a system to code by voice: https://news.ycombinator.com/item?id=6203805
I hear that more often :) It was actually set at a reasonably fast clip in that bit of audio ...like you'd use for a novel. For speedreading I tend to go a bit faster
A friend of mine, a programmer, lost his sight. He was an Emacs user and could more or less continue programming thanks to Emacspeak, a package for Emacs that alters the voice depending on the syntactic construct of the word it is reading. It goes without saying that without Emacs and Emacspeak he would have had an uphill struggle returning to his job.
Wow ...so many questions ...I'm not sure where to begin answering all of these :) HN is throttling me, so please come find me on #zersiax on freenode to discuss this if you have more questions :) I hope this goes through ...
As a web developer, I would LOVE it if you could compile a list of things I should do to make your life easier. I have read the best practices; however, I would love to have a list that comes from you. What are some websites that are doing it well?
Please come find me on IRC or Twitter for this :) Any list I could draw up would leave you with a lot of questions, and asking those would just get frustrating in a medium like this. If you'd rather use another form of communication, please let me know and I'll see what I can do.
I never had a chance to talk to somebody who has been blind all his life, and I've always been curious about it. It seems pretty obvious that a blind person can dream, because hearing and smell are senses as much as eyesight is. I cannot comprehend the opposite: does a person who has been blind all his life actually understand what "seeing" means? Of course he knows from communicating with others that he lacks an ability most people have, but does he "feel" it somehow? It is especially interesting with well-read people: writers often spend quite a large portion of a book describing how something looks. So a literate blind person must be well aware of words like "color", "beauty" (addressing the look of something), "bright", "dark", "dull", "picture" and such. But if he has never seen, do all these words mean anything to him? Does he have any idea of what it is like "to see"?
I have been almost totally blind all my life. I have some light perception, and from talking to people the nearest I can come up with for colors is that black is when it is totally dark, gray is a normal room with the lights turned on, and white is looking directly at the sun. I associate colors with objects, such as grass being green, dirt being brown, the ocean being blue, etc., but this isn't actually useful information.
Thank you for the response. I don't know if having some light perception can be called "totally blind"; since the sense of seeing isn't totally unfamiliar to you, it's probably more like "very, very bad eyesight", am I right?
But still, as I understand it, you somewhat have a grasp of what color actually is; it isn't just "some word other people use", is it?
But what of the notion of visual beauty? Does it mean something to you when somebody says that a picture, or a woman, or a sculpture is beautiful? Can you tell if a sculpture is "beautiful" or not after touching it with your fingers? If so, would it be meaningful to discuss something like that with your sighted friends, or would you be more likely addressing something completely different from what they do?
And a couple more questions, if you don't mind. Are you completely functional in a well-known environment? For example, how hard is cooking in your own kitchen for you? And how much of a problem is it if somebody was working in the kitchen before you and left some items, like a knife, in the wrong places? How hard is it for you to move in a completely unfamiliar environment? Like, say, can you travel to some distant new location completely on your own, without a dog or another human? How long would it take to become accustomed to a new environment, like when being a guest at somebody's place? Could you take a walk in a forest or a big park on your own? Would you feel insecure being there for the first time?
I'm sorry if I'm bothering you, it's just I really, really wanted to ask all these questions and more for quite a long time.
A number of questions :) I'll see if I can answer them, at least how I think about it. :)
For me, color is abstract enough that I am unable to imagine it. Associating colors with objects is something a lot of blind people do, but it turns into parameters on an object that way. Color = blue; there's no description ... blue = null pointer exception;
As for visual beauty, this is incredibly difficult at least for me to fully comprehend, because you sighties have the annoying habit of changing what you think is beautiful every so many years :P
Beauty might be the smoothness of the sculpture, its symmetry, its proportions being exactly right. That is the only, somewhat clinical, description of beauty I can think of, at least visual beauty. I must say that when it comes to beauty in humans I tend to disregard it completely. People look like what they look like, and that is subject to change anyway as time goes on; no use worrying about it if I can't see it myself anyway.
I myself live on my own, and am therefore forced to cook, clean, do my own laundry, etc., whether I like it or not. This, like a lot of other things, is something you learn to live with. I can safely say that yes, I am fully functional in my own home :)
Changes in the kitchen area can be a little annoying, but would never severely throw me off. There's only a finite number of places the knife could've gone, to use your example. I tend to lay out everything I need before starting to cook though, to avoid ...less than ideal situations when such a thing happens.
Unfamiliar environments are a bit of a tricky thing. You are right in assuming a dog is an incredible help in such situations; I am hopefully receiving my first dog some time this year, and I see time and time again how this affects the confidence and independence of blind people I know. To answer your question though, it really depends on the person. Some people have a very good sense of direction and wouldn't have a problem in a forest, or would get used to another's house quickly enough. I myself have a bit more trouble with that, but I do hop onto a train to an unfamiliar city if I know someone will be at the final station to pick me up and take me where I need to go; that's not something every blind person would do. It depends on a huge number of factors.
The short answer to your question is no. Just as my dreams don't use any visual imagery, concepts like bright, dull, color, etc. are abstract concepts that I really can't put any particular image to. Rule of thumb: if you can't fully describe it in words that don't involve referring to visual cues, I don't know what it is :) As for feeling it ...it depends. I myself can still see differences between light and dark, as long as they are different enough. Think lightning at night, a lamp turning on in a dark room, etc. There's no color, shape or anything like that when I see that, though. I therefore kind of know what seeing is, but only a very small portion of it. For people who are even more blind than I am, meaning ...no vision at all ...they wouldn't really 'feel' it. It's often compared to imagining you have a sense that you actually don't have, like a dolphin's ability to always know where north is ...at least ...it was dolphins that could do that, no?
Maybe it's like picturing 4D objects is for us? I mean, you can study a lot of 4D geometry and answer questions about it, but no one is able (AFAIK) to "see" 4D objects. We just don't have a 3D matrix of receptors that would be the analogue of 3D vision, nor the processing region for that.
The questions you asked are answerable in this framework, I guess, although even adding a new dimension is less than getting a whole new sense.
This audio is the most unintelligible block of sound I've ever heard, but it made my day much happier to know you can understand it and use it nicely.
I will try to use the HTML5 accessibility tags and attributes whenever I can from now on (I currently don't even know what accessibility features are there to be implemented).
What a nice chap. I like and adhere to the opinion prevalent in this thread that we should pay more attention to Accessibility. I personally find I'm conditioned to ignore people with different needs than me when I design products, and this is an eye-opening example.
> I therefore have to keep looking for tutorials, programs and tools that are accessible, and cannot simply pick up any off-the-shelf IDE.
Another advantage of the Ruby community's general commitment to producing a language that can be written in any old text editor. I think Java is long past that point; you really need an IDE with features that know about Java to be effective in Java.
Making sure things are still doable with a plain text editor gives developers a lot more options (including for developing new editing environment improvements), instead of locking them in to certain IDEs. A lot more options for accessibility reasons or any reasons.
I helped rewrite Narrator, the Windows built in screen reader, for Vista. After we had a basic version working I tried turning off my monitor and using it to write code. I gave up quickly.
Several years back I worked with a team that had a blind developer. The team was transitioning to Java. The blind developer told me in a meeting that she was having trouble doing something in Eclipse. I told her that I would go with her to her desk to help diagnose the problem. Watching a blind person use Eclipse with a screen reader was simultaneously awe-inspiring (the screen reader part) and horrifying (the Eclipse part). Needless to say, Eclipse was not well suited for the blind.
Please, elaborate on this. I tend to use Eclipse as my primary IDE and am wondering what the blind person had trouble with, what screenreader they were using and how long ago this happened.
This occurred in 2009 and unfortunately I don't recall any of the details. I only remember that it was running on Windows. I was blown away watching her use a screen reader for doing all sorts of things. I know that some of the developers on the team had originally worked with COBOL on mainframe or AS400. I don't know how well (if at all) 3270 or 5250 terminal emulators work with screen readers, but I strongly suspect that COBOL source code was easier to manage than Java in Eclipse.
I turned on OS X's built-in screen reader, then set the rate to 100 (the fastest it goes). It's still not as fast as NVDA.
It's interesting that he's using Windows 8, and I'd have liked it if he'd talked about that briefly. I'd always thought that Apple was way ahead of the other vendors on accessibility, but perhaps with third-party screen reading software available on the desktop that's not the case.
Even though the built-in screenreader Mac OS X uses blows Narrator out of the water currently, it has a long way to go to compete with 3rd-party screenreaders on the Windows platform, especially when it comes to productivity and actually getting work done. The advantage of VoiceOver is that it is baked into the OS, and therefore also the recovery media, making it far easier to reinstall the entire OS from scratch. This can be done in Windows as well, but only recently were the tools created to actually allow this semi-reliably. On the mobile front, Apple does outperform the competition by a rather broad margin.
A decade ago, I got to visit one of the accessibility labs at Microsoft. I'm not terribly surprised that Visual Studio works well as there are ocularly impaired developers at the company. I had the opportunity to speak with those who ran the lab and observe how someone used these screen readers first hand.
A friend of mine is a blind developer. He built his own screenreader for Android ( https://play.google.com/store/apps/details?id=info.spielproj... ), because he was unhappy with the existing options. I haven't yet gotten over being amazed, but I always knew he was really bright, so I should just get over assuming not having sight would prevent him from being an effective developer.
He has ambitions to do hardware hacking, but has been thwarted by difficulties with identifying parts (e.g. resistors are color-coded), among other things. I've been meaning to sit down with him sometime and work on something.
The absolute limit of comprehension for sighted people is 10 syllables per second. Blind people, however, can comprehend speech sped up to 25 syllables per second.
Thank you so much for your insightful post and perspective.
I have a question on how you imagine/think. People with eyesight often think in pictures, even when thinking about abstract things like programming. I often visualize how a data structure looks and how it interacts with other code. I have found it tremendously useful, as I can replay/test such scenarios in my head.
Do you have similar experience when you think? Do you construct mental picture (such as circle, a binary tree) in your head? What is it like?
That question is both very easy and incredibly difficult to answer. Yes, I visualize code the way you guys do, but not with pictures. Let us take a 2-dimensional array as an example. I tend to visualize this as a table. A table, for me, is a construct with columns and rows. Naturally, I do not have a spatial picture of this; the closest I can get is a tactile representation of either a chess board or a Scrabble board. In both cases, a single array index becomes a tile, which makes me able to visualize it. But just like dreams, I would not use a visual frame of reference here because that simply isn't an option; it is a tactile representation that I somehow translate into a mental ...'picture' for lack of a better word. If you think that is tough, think of how incredibly tough it is for me to visualize CSS ;)
Thank you for answering my question. I think I now get a sense of thinking using tactile representations. I can imagine it is incredibly hard to work on things that don't have a 1-to-1 mapping between visual and tactile.
It is a fascinating topic. I hope to have a chance to understand how brain really works in my lifetime.
Related: a BSDTalk episode with a blind BSD user[1]. It's about many things, not just how blind people deal with computers, but it offers some great insights nevertheless. She finds VAXen much easier to deal with than PCs, and CLIs much more accessible than GUI screen readers.
I imagine a purely text-based terminal setup running on a specially crafted host OS/VM that does the text-to-speech would be a fantastic solution. You could browse the web, email, Twitter! I'm not sure how CLI browsers handle JS, though.
If this existed would there any good reason to be using a GUI at all (for a Visually Impaired Person)?
I think Emacspeak is close to what you're talking about.
http://emacspeak.sourceforge.net/
The reason I don't use it is that I could never get it to work quite right, so I just continued using Windows. It's a lot easier to use Windows, Eclipse, Outlook, Office, etc., even if it is somewhat less productive than Emacspeak would be. The amount of time I'd have to spend working out how to integrate with my co-workers' Windows-based setups if I used Linux with no GUI makes it a non-starter.
Mainly, if you are working with others, this approach would fall on its face rather quickly. People actually do this, doing all their work using Emacs modes and Emacspeak. It can be done, but in this day and age it is not the most elegant solution, especially with web 2.0-based websites.
The only command-line screenreader-compatible browser I know of that also supports JavaScript is edbrowse, which is not so bad to use if you’re already familiar with ed.
Really curious about how a linux environment compares in terms of usability for the blind. zersiax, are you using windows primarily because of the toolchain you need for work / school, or did you find linux lacking in terms of its support for your needs?
Are there frameworks to automatically analyse html for accessibility and perhaps provide a certain rating based on set guidelines? I think that might be a very interesting project to work on if such a thing does not exist.
There are automated tools, but in my experience they mostly pick up the easy things (missing alt attributes, etc.). The harder bits are focus and notification in SPAs. Read the WCAG docs [1] and the WAI-ARIA docs [2] and stick to them. Then test with a screenreader.
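As a sketch of that "easy things" category, here's roughly what such a tool does under the hood, using BeautifulSoup (the HTML snippet is invented; real checkers like axe-core or WAVE go much further):

    from bs4 import BeautifulSoup  # pip install beautifulsoup4

    html = """
    <img src="logo.png">
    <a href="/vote"><div class="arrow"></div></a>
    <input type="text">
    """

    soup = BeautifulSoup(html, "html.parser")

    # Images need alt text for screen readers.
    for img in soup.find_all("img"):
        if not img.get("alt"):
            print("img missing alt text:", img)

    # Links need an accessible name: visible text or an aria-label.
    for a in soup.find_all("a"):
        if not a.get_text(strip=True) and not a.get("aria-label"):
            print("link with no accessible name:", a)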
I actually had an idea for this. What if blind users could wear little braille terminals? Braille is constructed from a series of dots, so why not have something like a bracer with an array of dots that poke the skin?
It might work, but the nerves found in the fingertips are far more sensitive than anywhere else on the body. This might be trainable though, I don't know.
The tongue is incredibly sensitive, I've seen some work on giving blind people the ability to have low-resolution vision by having an array of electrodes on the tongue (before the tongue, they used the back, but that's not anywhere near as sensitive) and a webcam.
Yes! I guess I'm experiencing awe at symbolic prowess comparable to that of only-functionally-literate people staring for the first time at the syntax-colored/indented walls of text used by programmers.
I had to download the audio example and lower the playback speed to a third of what it was before it started making sense! It is a reading of an earlier draft of the first bit of the post, btw, so there are several divergences which make it even harder to try to match text to speech.
A housemate of mine works with a blind web dev. The only real problem she has is making sure certain CSS divs appear correctly on the screen, which she overcomes by asking friends, etc., if it looks correct.
It's a shame that so many programs don't follow the accessibility guidelines, but it's just too damn easy to forget about the disabled if you aren't disabled yourself. But this article was an eye-opener for me (no pun intended).
I'm installing a screen-reader today ... and I'm going to listen to everything I create on it (at a slower speed - at least to start). We have stringent accessibility guidelines, tools to help find accessibility problems and a person in charge of making sure we adhere to those principles, but wouldn't it be more real to close your eyes and experience your creation yourself?
Since he has a whole occipital lobe to devote to parsing that, I don't feel so bad that my auditory processing isn't quite up to snuff. Reminds me of speed reading, though. What is the wpm on that? Maybe I'll ask him on freenode.
Is there evidence to suggest that a brain is that plastic? Genuinely curious. I feel like the "blind people have super-powered other senses" is an urban legend, like the 10% of your brain misconception.
The brain seems to be pretty darn plastic. There have been experiments with giving people whole new sensory inputs, and finding that the brain adapted to them quite well. Two examples:
- A grid of electrodes on the tongue, activating with a pattern fed from a camera. After a while, blind experimental subjects reported actually seeing what the grid displayed.
- A belt of buzzers hooked to a compass. The buzzer closest to magnetic north was always active. After a couple months, subjects weren't really conscious of the buzzing, but got a really good mental map of their environment and their position in it. They could navigate unfamiliar environments much better than before...and then much worse, when the experiment ended.
There are also some well-known cases of blind people using echolocation pretty effectively, without any special hardware.
This American Life did an episode recently on Daniel Kish, one of the most famous echolocation practitioners[0], and here's a short YouTube video of the same guy.[1]
Echolocation is a skill that basically translates clicky sounds that bounce back into structural details of the obstruction the sound waves encounter: height, width, distance, and if you get good at it, even material composition. Look at Dan Kish to see an example.
This. Make any prediction of how advanced the brain is, then double that, at the very very least. It is barely short of magical. Even single neurons can do processing way more advanced than the artificial ones we use in machine learning. The rest of the body is honestly quite disappointing in comparison. Even the immune system and DNA/RNA mechanisms seem trivial and those are pretty dope compared to most other things.
> Even single neurons can do processing way more advanced than the artificial ones we use in machine learning.
How does that work? A single neuron is a single neuron. My understanding is that the brain has lots of neurons, and also that they are assembled in certain "NN architectures" that are far more advanced than what we currently have. But I think that if you go to the single neuron level, they are pretty similar in terms of problem solving capabilities.
Well, a neuron can either fire or not fire; 0 or 1. For a neuron to fire, enough of its synapses need to fire. How likely a single synapse is to do so based on an incoming action potential varies over time according to Hebbian learning. Once x synapses fire, the input is linearly summed to determine whether the neuron as a whole fires. That's a neat abstraction, and it is pretty descriptive. But it's also a little too neat to be true. The summation is, of course, not linear, and neurons react to incoming signals even if they don't fire. How likely a neuron is to fire also depends on how much it has fired recently, as the synapses "get tired" (the concentrations of certain molecules are temporarily exhausted), but for some neurons the likelihood increases as a result of previous firings before it decreases. Hebbian learning increases the strength of the connection to neurons that participate in successfully firing the neuron, but connections under a certain strength threshold decay with time, while those over it are stable (an inability to create these stable connections is linked to Alzheimer's).
That's what I remember, and I just took some neuroscience courses. There are multiple books out there attempting only to describe the behaviour mathematically, let alone describe the underlying mechanisms, which are true "here be dragons" territory.
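For contrast, here's a toy model (nowhere near biologically exact; all numbers invented) of the textbook abstraction described above: weighted inputs are summed against a threshold, coincident activity strengthens a synapse Hebbian-style, and unreinforced weights slowly decay.

    import random

    class ToyNeuron:
        def __init__(self, n_synapses, threshold=1.0, lr=0.05, decay=0.99):
            self.w = [random.uniform(0.1, 0.5) for _ in range(n_synapses)]
            self.threshold = threshold
            self.lr = lr        # Hebbian learning rate
            self.decay = decay  # unreinforced connections fade over time

        def step(self, inputs):
            # inputs: list of 0/1 spikes, one per synapse
            fired = sum(w * x for w, x in zip(self.w, inputs)) >= self.threshold
            for i, x in enumerate(inputs):
                if fired and x:          # "fire together, wire together"
                    self.w[i] += self.lr
                self.w[i] *= self.decay  # passive decay
            return fired

    neuron = ToyNeuron(4)
    for _ in range(10):
        print(neuron.step([random.randint(0, 1) for _ in range(4)]))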
"Neuropsychologists compared the brain activity of people who can see and people who were born blind, discovering that the part of the brain that normally works with our eyes to process vision and space perception can actually rewire itself to process sound information instead."
That's a very interesting point! I would consider that a superpower. Would this also imply that humans are capable of way more than one would normally think, given proper training?
It's just skimming. If you were reading a book, you would just scan across it to find what you want. You don't have to read everything on the page to know what it is talking about.
I would not say that. When you are skimming, you actually skip parts of the text. Since audible text is a constant stream of data, just skipping parts would make it incomprehensible.
In speed reading, you can read sentences backwards and the brain will reorder them in real time. You can learn to decouple image (symbol/word) acquisition from comprehension. Step one on this path is to stop subvocalization, i.e. silently saying words to yourself in your head.
It's the same difference as between synchronous (O_SYNC) and asynchronous disk writes.
I would compare this to using something like Spritz (http://www.spritzinc.com/) for a sighted person. At first the higher speeds feel too fast but your brain quickly adapts, especially with daily practice. Given that he's used a screen-reader his entire life when dealing with computers, I'm sure his brain has deeply entrenched pathways for parsing audio like this.
I found that it was easier to catch a few words from that file when I closed my eyes to shut out other distractions. I still couldn't understand much, but I could see how there would be a progression to build to that level.
Indeed, I used NVDA for a while when I was working on accessibility for a web app and after a week I could turn it up pretty fast, but nowhere near that fast.
It might. You are putting your finger on a rather experimental way of browsing through code, something which Emacspeak is currently the only editor I am aware of to do reliably. Rather than using sound effects, it uses differences in the TTS voice to denote different syntax properties, kind of like audible syntax highlighting.