From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from relay01.roc.ny.frontiernet.net ([66.133.182.164])
	by speech.braille.uwo.ca with esmtp (Exim 3.36 #1 (Debian))
	id 1DxmqS-0005QG-00 for ; Wed, 27 Jul 2005 10:25:16 -0400
Received: from filter07.roc.ny.frontiernet.net (filter07.roc.ny.frontiernet.net [66.133.183.74])
	by relay01.roc.ny.frontiernet.net (Postfix) with ESMTP id E739A364361
	for ; Wed, 27 Jul 2005 14:25:10 +0000 (UTC)
Received: from relay01.roc.ny.frontiernet.net ([66.133.182.164])
	by filter07.roc.ny.frontiernet.net (filter07.roc.ny.frontiernet.net [66.133.183.74])
	(amavisd-new, port 10024) with LMTP id 10438-05-32
	for ; Wed, 27 Jul 2005 14:25:10 +0000 (UTC)
Received: from Scott.citlink.net (170-215-11-39.nas1.int.mn.frontiernet.net [170.215.11.39])
	by relay01.roc.ny.frontiernet.net (Postfix) with ESMTP id 36B3D364371
	for ; Wed, 27 Jul 2005 14:25:08 +0000 (UTC)
Message-Id: <6.2.3.4.0.20050727092307.0372af40@pop3.citlink.net>
X-Mailer: QUALCOMM Windows Eudora Version 6.2.3.4
Date: Wed, 27 Jul 2005 09:25:06 -0500
To: "Speakup is a screen review system for Linux."
From: Scott Berry
In-Reply-To: <20050727130524.GB7548@rednote.net>
References: <584DE893B0E08F4B9748E295029F1E97022AEB19@maya.aztec.soft.net>
	<20050727034121.GA26853@taylor.homelinux.net>
	<20050727130524.GB7548@rednote.net>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii; format=flowed; x-avg-checked=avg-ok-56CD67DC
X-Virus-Scanned: by amavisd-new-2.2.1 (20041222) at filter07.roc.ny.frontiernet.net
Subject: Re: An idea,
X-BeenThere: speakup@braille.uwo.ca
X-Mailman-Version: 2.1.5
Precedence: list
Reply-To: "Speakup is a screen review system for Linux."
List-Id: "Speakup is a screen review system for Linux."
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Wed, 27 Jul 2005 14:25:16 -0000

This really doesn't pertain to how Gnopernicus works.
But one question I really would like to throw out here is: why are the
keystrokes so difficult? I know you can change key bindings in
Gnopernicus, but it looked like it would take someone with 20 years of
experience to do it. Maybe this has changed in recent versions, but I
remember when I started using Gnopernicus: wow! Difficult to learn.

At 08:05 AM 7/27/2005, you wrote:
>Hi, Lorenzo:
>
>Others have responded with reference to what an X server does, and
>doesn't do. I want to respond to two other particular points from your
>message.
>
>Lorenzo Taylor writes:
> > -----BEGIN PGP SIGNED MESSAGE-----
> > Hash: SHA1
> >
> > ... Gnopernicus, for example, is using libraries that
> > rely on certain information sent by the underlying application libraries.
> > Unfortunately, this implementation causes only some apps to speak while
> > others which use the same widgets but whose libraries don't send
> > messages to the accessibility system will not speak.
>
>This is only partially correct. Any applications using those "same
>widgets," as you put it, will speak. There are no exceptions.
>
>What causes them not to speak is that the properties required to make
>them speak have not been supplied. So, Gnopernicus is getting an empty
>string to render, which I suppose it dutifully renders as silence.
>
>Fortunately, these are open source applications and we don't need an
>advocacy campaign to resolve these kinds of problems. A solid example of
>this at work is the Gnome Volume Control. It was written with gtk2, but
>the developers did not supply all the relevant property data. So, a
>blind programmer came along one weekend, fixed it, and submitted the
>patch, which has shipped with the rest of Gnome Volume Control ever
>since.
>
>Now the next point ...
>
> > But it occurs to me that X is simply a
> > protocol by which client applications send messages to a server which
> > renders the proper text, windows, buttons and other widgets on the screen.
> > I believe that a screen reader that is an extension to the X server
> > itself (like Speakup is a set of patches to the kernel) would be a far
> > better solution, as it could capture everything sent to the server and
> > correctly translate it into humanly understandable speech output
> > without relying on "accessibility messages" being sent from the
> > client apps.
>
>As others have pointed out, there's nothing to be gained by speaking RGB
>values at some particular X-Y mouse coordinate location. But, I'm sure
>that's not what you really intend. If I interpret you correctly, you're
>suggesting some kind of mechanism whereby a widget can be reliably
>identified and assigned values that the screen reader can henceforth
>utter. This is the approach, with a Windows OSM, that has been used over
>the past decade, and it's what allows screen readers like JFW to develop
>interfaces based on scripts. For instance: take widget number 38,492,
>call it "volume slider," speak it before anything else when it shows up
>on screen, and facilitate the method that will allow the user to use the
>up and down arrows to change its value, etc., etc.
>
>It is arguable, and has been cogently argued over the past 18 months,
>that the failure of the original Desktop Accessibility Architecture
>promoted by Sun and Gnome was in not providing such mechanisms. A great
>part of the intent of the Orca screen reader proof of concept was to
>provide exactly this kind of functionality. I believe this is now being
>addressed, though I'm not aware that any code for newer Gnopernicus (or
>post-Gnopernicus) readers has yet been released. However, I do fully
>expect that Gnopernicus is not the last word in desktop screen readers.
>
> Janina
>
>_______________________________________________
>Speakup mailing list
>Speakup@braille.uwo.ca
>http://speech.braille.uwo.ca/mailman/listinfo/speakup
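The property mechanism Janina describes (two applications built from the
same widgets, where one speaks and one stays silent depending on whether
the accessible-name property was ever supplied) can be illustrated with a
small self-contained sketch. All names below are hypothetical; this is a
toy model of the idea, not the actual ATK/AT-SPI API that Gnome
applications and Gnopernicus use.

```python
# Toy model: speech depends on the accessible-name property being
# supplied, not on the widget type itself. Hypothetical names only;
# this is not the real ATK/AT-SPI API.

class Widget:
    def __init__(self, widget_type, accessible_name=None):
        self.widget_type = widget_type          # e.g. "slider"
        self.accessible_name = accessible_name  # may never be set

def render_speech(widget):
    """Return what the screen reader would utter for this widget."""
    # An unsupplied property arrives as an empty string, which the
    # reader dutifully renders as silence.
    return widget.accessible_name or ""

broken = Widget("slider")                    # property never supplied
fixed  = Widget("slider", "volume slider")   # property filled in

print(repr(render_speech(broken)))  # '' -> silence
print(repr(render_speech(fixed)))   # 'volume slider'
```

The Gnome Volume Control patch Janina mentions amounts to moving from the
first case to the second: the widget type never changed, only the missing
property data was supplied.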