public inbox for blinux-list@redhat.com
 help / color / mirror / Atom feed
* Re: speech standard
@  Hans Zoebelein
   ` Buddy Brannan
   ` speech standard Kevin Forge
  0 siblings, 2 replies; 77+ messages in thread
From: Hans Zoebelein @  UTC (permalink / raw)
  To: blinux-list

We should bake small loaves which fit into the oven.

Before creating a "speech standard" (do you think T.V. Raman will bend to
this 'standard'?) we should use the building blocks which are already here
and have proven themselves in real life.

Take the Emacspeak speech server as a basis for speech output. It supports
a number of hardware synthesizers and even uses the (still buggy) Mbrola
software synthesizer.

Write modules for other hardware synthesizers. Let's kill the damn bugs in
the mbrola-server, which stalls from time to time and sometimes takes
Emacspeak down with it. (The freephone text-to-phoneme translator was
buggy as an alley cat when I tried it out; stupid things like buffer
overflows showed up.) Make this run solid.

Write a reasonable module which supports the FESTIVAL speech synthesizer
for the Emacspeak speech server. The default FESTIVAL server option is a
joke: if there are twenty speech requests, they are not queued into one
client; instead twenty clients are opened at the same time. Now you have
time to drink a cup of coffee until speech comes back (_if_ it comes
back). Not a very intelligent solution, and it cries out for improvement.
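The fix being asked for here amounts to funnelling every speech request
through one persistent client instead of opening a client per request. A
minimal sketch in Python (the `SpeechQueue` name and the pluggable `send`
callable are illustrative only; this is not Festival's real interface):

```python
import queue
import threading

class SpeechQueue:
    """Serialize speech requests through one persistent client.

    The default Festival option opens a new client per utterance; this
    sketch instead funnels all requests through a single worker thread,
    so twenty requests become one ordered stream.  The `send` callable
    stands in for the real client connection (hypothetical name).
    """

    def __init__(self, send):
        self._send = send            # one "connection" for all requests
        self._q = queue.Queue()
        self._worker = threading.Thread(target=self._run, daemon=True)
        self._worker.start()

    def say(self, text):
        self._q.put(text)            # queue instead of opening a new client

    def _run(self):
        while True:
            text = self._q.get()
            if text is None:         # sentinel: shut the worker down
                break
            self._send(text)         # requests go out strictly in order

    def close(self):
        self._q.put(None)
        self._worker.join()
```

Here `send` would wrap the single connection to the Festival server;
twenty calls to `say()` become one ordered stream rather than twenty
simultaneous clients.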

Now when _this_ is done, and all hardware synths including Mbrola and
Festival are running nicely without bugs on the Emacspeak speech server,
then we can _think_ about whether we need another speech standard than
the one we have just created.

--Hans



^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: speech standard
   speech standard Hans Zoebelein
@  ` Buddy Brannan
     ` Hans Zoebelein
   ` speech standard Kevin Forge
  1 sibling, 1 reply; 77+ messages in thread
From: Buddy Brannan @  UTC (permalink / raw)
  To: blinux-list

Ah, Hans, but here's the thing. Am I mistaken in the notion that the
Emacspeak speech server, like Emacspeak itself, requires Emacs? What about
those of us who just want to use a shell without Emacs? (BTW...I'll
probably get Emacspeak going once I get Linux up, but....using Emacspeak to
do an install seems a little impractical....whereas Brett's svlpro sounds
like it'd work, but you can't use the Emacspeak speech server for this, can
you?) So while Emacspeak's speech server works with a lot of stuff....it
doesn't seem very portable. Unless, of course, I'm mistaken, which wouldn't
surprise me.

--
Buddy Brannan
Internet America Customer Care Technician
Email: buddy.brannan@airmail.net
or davros@airmail.net
The 1-800-Be-A-Geek people :)
------Standard disclaimers re: personal opinions not reflecting the views
of my employer and all that apply here------


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: speech standard
   ` Buddy Brannan
@    ` Hans Zoebelein
       ` Buddy Brannan
                       ` (3 more replies)
  0 siblings, 4 replies; 77+ messages in thread
From: Hans Zoebelein @  UTC (permalink / raw)
  To: blinux-list

Buddy, 

Emacspeak uses the speech server for speech output.
Just as Emacspeak does, any other application can use the speech server
too. Or you can run a shell and use a screen reader in that shell, where
the screen reader speaks through the speech server. (But wait:
isn't there Emacspeak + Eterm, which _is_ a shell with a working screen
reader...)
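Any program, not just Emacs, can drive the speech server by writing its
command protocol to the server's standard input. A sketch, assuming the
`q` (queue) and `d` (dispatch) command names used by the dtk-style server
scripts shipped with Emacspeak; check the scripts in your own
installation before relying on this:

```python
def speech_server_commands(text):
    """Format one utterance for an Emacspeak-style speech server:
    queue the text, then dispatch it.  The q/d command names are an
    assumption drawn from the dtk-style server scripts."""
    return f"q {{{text}}}\nd\n"
```

An application could then spawn the server as a subprocess and write
`speech_server_commands("hello")` to its standard input to speak without
Emacs anywhere in the picture.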

BTW, the typical screen reader is _stupid_. It does not know what you
want to know and what you don't, so it babbles out everything it can
find on the screen without any intelligence of its own. What happens if
it hits multi-column text? It reads line 1 of column 1 and then line 1
of column 2. But that's not the way you want to read the article. You
want an intelligent tool that recognizes "Oops! We have two columns"
and reads first column one and then column two. (Emacspeak knows this.)
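The column-aware reading order described above can be sketched in a few
lines, assuming the simplest case of two columns separated by a fixed
whitespace gutter (real column detection, as in Emacspeak, is
considerably smarter):

```python
def read_columns(lines, gap="   "):
    """Split two-column text at a whitespace gutter and return it in
    reading order: all of column one, then all of column two.  A dumb
    reader would interleave the lines instead.  Sketch only; the
    three-space gutter is an assumption, not a general detector."""
    left, right = [], []
    for line in lines:
        if gap in line:
            l, _, r = line.partition(gap)
            left.append(l.rstrip())
            right.append(r.strip())
        else:
            left.append(line.rstrip())   # full-width line: column one
    return left + [r for r in right if r]
```

Feeding it two lines of side-by-side text yields column one's lines
first, then column two's, instead of the interleaved babble.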

Concerning the 'install phenomenon': many people who are blind want
speech so they can 'install the OS' on their own. I think this serves
them more as a proof of independence than as an indication that they
have to install their OS on a daily basis. I personally install about
once every 3 years or so, so installing accounts for about 0.091
percent of my total computer usage (if an install takes a whole day
every three years).
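The 0.091 percent figure checks out as a back-of-the-envelope
calculation: one full day of installing per three years of use:

```python
install_days = 1                # one whole day per install
days_between = 3 * 365          # one install every three years
share = install_days / days_between
print(f"{share:.3%}")           # prints 0.091%
```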

But OK, if you really want this 'boot speech feature', you'll need a
device which plugs in like a video card and gives you output from
power-on time. All other devices which hang on a serial or parallel
port cannot produce output until the kernel is up. So you won't get
'power-on speech' but only 'booting speech'. But you have 'booting
speech' anyway, because Linux produces a nice file called 'dmesg' which
holds the messages produced during your last boot. Do you really want
to hear interesting stuff like this?

[begin of my /var/log/dmesg file]
c310
pcibios_init : BIOS32 Service Directory entry at 0xfc740
pcibios_init : PCI BIOS revision 2.00 entry at 0xfc770
Probing PCI hardware.
Calibrating delay loop.. ok - 49.87 BogoMIPS
Memory: 63156k/65536k available (616k kernel code, 384k reserved, 1216k
data)
This processor honours the WP bit even when in supervisor mode. Good.
Swansea University Computer Society NET3.035 for Linux 2.0
NET3: Unix domain sockets 0.13 for Linux NET3.035.
Swansea University Computer Society TCP/IP for NET3.034
IP Protocols: ICMP, UDP, TCP
VFS: Diskquotas version dquot_5.6.0 initialized
Checking 386/387 coupling... Ok, fpu using exception 16 error reporting.
Checking 'hlt' instruction... Ok.
Linux version 2.0.33 (root@goldfish) (gcc version 2.7.2.3) #17 Tue Jun 30
18:35:
05 CEST 1998
Starting kswapd v 1.4.2.2
Real Time Clock Driver v1.07
Ramdisk driver initialized : 16 ramdisks of 4096K size
Floppy drive(s): fd0 is 1.44M, fd1 is 1.2M                       
FDC 0 is a National Semiconductor PC87306
md driver 0.35 MAX_MD_DEV=4, MAX_REAL=8
ncr53c8xx: at PCI bus 0, device 1, function 0
ncr53c8xx: 53c810 detected
ncr53c810-0: rev=0x01, base=0xfbfef000, io_port=0xe800, irq=9
ncr53c810-0: ID 7, Fast-10, Parity Checking
ncr53c810-0: restart (scsi reset).
scsi0 : ncr53c8xx - revision 2.4a
scsi : 1 host.
ncr53c810-0-<0,0>: using tagged command queueing, up to 4 cmds/lun
  Vendor: SEAGATE   Model: ST12400N          Rev: 8580
  Type:   Direct-Access                      ANSI SCSI revision: 02
Detected scsi disk sda at scsi0, channel 0, id 0, lun 0
ncr53c810-0-<1,0>: using tagged command queueing, up to 4 cmds/lun    
..... and so on and so on....
[most of dmesg cut]

I never look at this file. Would you like to hear this stuff every
time your box boots? I doubt it.

And concerning 'portability': this speech server should be runnable on
every UNIX machine under the sun (sic!) and anywhere else. Trusted
sources have even told me about Emacspeak on Windows NT. And I was
informed that Emacspeak even runs on the ITSY (a small device, smaller
than a Palm Pilot but with PC performance, running Linux). Do we need
more portability? What about a Java version of Emacspeak? It shouldn't
be too difficult. So we had better forget the portability argument.

So I think you are mistaken here.

--Hans 


On Thu, 11 Mar 1999, Buddy Brannan wrote:

> Ah, Hans, but here's the thing. Am I mistaken in the notion that the
> Emacspeak speech server, like Emacspeak itself, requires Emacs? What about
> those of us who just want to use a shell without Emacs? (BTW...I'll
> probably get Emacspeak going once I get Linux up, but....using Emacspeak to
> do an install seems a little impractical....where Brett's svlpro sounds
> like it'd work, but you can't use the Emacspeak speech server for this, can
> you?) So while Emacspeak's speech server works with a lot of stuff....it
> doesn't seem very portable. Unless, of course, I'm mistaken, which wouldn't
> surprise me.
> 
> --
> Buddy Brannan
> Internet America Customer Care Technician
> Email: buddy.brannan@airmail.net
> or davros@airmail.net
> The 1-800-Be-A-Geek people :)
> ------Standard disclaimers re: personal opinions not reflecting the views
> of my employer and all that apply here------


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: speech standard
     ` Hans Zoebelein
@      ` Buddy Brannan
       ` Whistler
                       ` (2 subsequent siblings)
  3 siblings, 0 replies; 77+ messages in thread
From: Buddy Brannan @  UTC (permalink / raw)
  To: blinux-list

Hi Hans,

Thanks for clearing up this misconception of mine. While you're correct
that I don't necessarily want to hear all the boot messages whenever I
boot my machine, it would be handy to be able to run an install with
minimal to no assistance--say, when I want to install my system and
there's no one conveniently available at the time? It happens
occasionally. But that aside, I'm glad to know that my concerns about
the Emacspeak speech server are unfounded. That is indeed good news,
and thank you for straightening me out on this point.

Now, as for the Itsy ... I'd like a bit more info on this! I've been
thinking it's high time I got hold of a portable notetaker or text
reader, for of course leaving notes to myself, reading text without
being leashed to a computer, and so on. Would this device then use
Emacspeak with Mbrola? Any projections on when it will be available,
how it can be obtained, and for how much, or at least a URL to get
more info on it?

Thanks. And of course, my comments weren't meant as anything more than
questions, which I'm extremely glad to have had answered.

--
Buddy Brannan
Internet America Customer Care Technician
Email: buddy.brannan@airmail.net
or davros@airmail.net
The 1-800-Be-A-Geek people :)
------Standard disclaimers re: personal opinions not reflecting the views
of my employer and all that apply here------


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: speech standard
     ` Hans Zoebelein
       ` Buddy Brannan
@      ` Whistler
       ` Whistler
       ` Nolan Darilek
  3 siblings, 0 replies; 77+ messages in thread
From: Whistler @  UTC (permalink / raw)
  To: blinux-list

I don't know what speech program you have been using, but I do not
consider my DOS screen reader stupid. You are correct that there are
times I have to make a set file or two to get it to do exactly what I
want, but I have no trouble running most DOS applications. As for
Windows 95, my speech again is very intelligent, and I can use the
programs needed in real-world environments right now. Currently I am
working on a team of software developers, and their documentation is
all done in Word. While I like Emacspeak for many applications, I don't
see a way I am going to be able to use Star Office through it so I can
interact in the manner I need to. Yes, Emacspeak can do almost
everything, but it cannot convert and read a Word or WordPerfect
document like it should, whereas Star Office for X can. I like all of
Raman's ideas, but unless the entire software world buys into them,
they limit us to his world.

Let me give you an example. I am the owner and coder for VME Mud
Engines. I work with a team of sighted programmers, and I have to use a
mud client that can interact quickly with other coders and players. I
have to be able to use Pine, because so far the attachment feature in
Emacs has not impressed me. When I use Pine, Emacs turns into stupider
speech software than anything I have ever used before. If I want to use
a client to get onto the interactive server, again Emacspeak loses out.
I have even tried that MOO client for Emacs, and while I am glad the
author is working on it and maybe it will be a solution some day, it is
not working half as well as my "stupid DOS" screen reader with any
Linux client networked by ethernet. I use Emacspeak mainly for coding,
because that is where it has power. I know Raman will argue that, and I
am sure if I had coded it and were an expert on it I could do anything
with it, but I can't, and I don't see myself as a blind user ever being
happy with Emacspeak as a total speech package.

Gnome, on the other hand, is starting to show promise, and I can't wait
to see a speech solution for it. At the command line I would like to
see Speak-up and Svlpro come up to the level of DOS screen readers;
then I could ditch this extra screen-reading box and move into the
Linux environment without wondering why everything has to move so
slowly when I use the eterm package to work on another server.

Emacspeak has its place, and it is a great solution to get you into,
and in some cases keep you in, Linux. But it is not the be-all and
end-all, and I am tired of people calling the old speech programs
stupid.

ASAP alone, networked over ethernet, works better at many tasks than
Emacspeak, and I speak from experience. The only problem is that it
will not work when I am out with a laptop, so I have to use Emacspeak
for now, until Speak-up or Svlpro are a more complete solution.

Raman has some great ideas in his book and on his Web page, but
something to think about: just because it is a great idea, does that
mean everyone will do it? I will wait for everyone to do it, and until
then I will strongly support any development like Speak-up and SvlPro
while using my Emacspeak in the meantime.

Ken /whistler

On Thu, 11 Mar 1999, Hans Zoebelein wrote:

> Buddy, 
> 
> Emacspeak uses the speech server for speech output. 
> As Emacspeak does it,  another application can use the speech server too.
> Or you can run a shell and use a screen reader in this shell where the
> screen reader speaks through the speech server. (But wait:
> Isn't there Emacspeak + Eterm which _is_ a shell with a working screen
> reader...) 
> 
> BTW the typical screen reader is _stupid_. It does not know
> what you want to know and what not. So it babbles everything it can 
> find on the screen without own intelligence. What will happen if it hits
> multi column text? It will read line 1 of column 1 and then line 1 of
> column 2. But that's not the way you want to read the article in parallel.
> You want to have an intelligent tool that recognized: Oops! We have two
> columns and reads first column one and then column two. (Emacspeak knows
> this). 
> 
> Concerning the 'install phenomenon': Many people who are blind want to
> have speech to 'install the OS' by their own. I think that this serves 
> for them more as a proof of independence than as an indication, that
> they have to install their OS on a daily basis. I personally install one
> time about every every 3 years or so. So the install quote is about 0.091
> percent of my total computer usage (if an install takes a whole day every
> three years) 
> 
> But OK, if you really want this 'boot speech feature', you'll need a
> device which plugs in like a video card and gives you output
> from power on time. All other devices which hang on a serial or parallel
> port cannot get output until the kernel is up. So you won't get 'power
> on speech' but only 'booting speech'. But you have 'booting speech' anyway 
> because Linux produces a nice file called 'dmsg' which holds the
> messages produced during your last boot action. Do you really want to
> hear such interesting stuff like this?
> 
> [begin of my /var/log/dmsg file]
> c310
> pcibios_init : BIOS32 Service Directory entry at 0xfc740
> pcibios_init : PCI BIOS revision 2.00 entry at 0xfc770
> Probing PCI hardware.
> Calibrating delay loop.. ok - 49.87 BogoMIPS
> Memory: 63156k/65536k available (616k kernel code, 384k reserved, 1216k
> data)
> This processor honours the WP bit even when in supervisor mode. Good.
> Swansea University Computer Society NET3.035 for Linux 2.0
> NET3: Unix domain sockets 0.13 for Linux NET3.035.
> Swansea University Computer Society TCP/IP for NET3.034
> IP Protocols: ICMP, UDP, TCP
> VFS: Diskquotas version dquot_5.6.0 initialized
> Checking 386/387 coupling... Ok, fpu using exception 16 error reporting.
> Checking 'hlt' instruction... Ok.
> Linux version 2.0.33 (root@goldfish) (gcc version 2.7.2.3) #17 Tue Jun 30
> 18:35:
> 05 CEST 1998
> Starting kswapd v 1.4.2.2
> Real Time Clock Driver v1.07
> Ramdisk driver initialized : 16 ramdisks of 4096K size
> Floppy drive(s): fd0 is 1.44M, fd1 is 1.2M                       
> FDC 0 is a National Semiconductor PC87306
> md driver 0.35 MAX_MD_DEV=4, MAX_REAL=8
> ncr53c8xx: at PCI bus 0, device 1, function 0
> ncr53c8xx: 53c810 detected
> ncr53c810-0: rev=0x01, base=0xfbfef000, io_port=0xe800, irq=9
> ncr53c810-0: ID 7, Fast-10, Parity Checking
> ncr53c810-0: restart (scsi reset).
> scsi0 : ncr53c8xx - revision 2.4a
> scsi : 1 host.
> ncr53c810-0-<0,0>: using tagged command queueing, up to 4 cmds/lun
>   Vendor: SEAGATE   Model: ST12400N          Rev: 8580
>   Type:   Direct-Access                      ANSI SCSI revision: 02
> Detected scsi disk sda at scsi0, channel 0, id 0, lun 0
> ncr53c810-0-<1,0>: using tagged command queueing, up to 4 cmds/lun    
> ..... and so on and so on....
> [most part of dmsg cut]
> 
> I never look at this file. Would you like to hear this stuff every
> time your box is booting? I doubt it.
> 
> And concerning 'portability': this speech server should be runnable on
> every UNIX machine under  the sun (sic!) and anywhere else. Trusted
> sources told me even about Emacspeak on Windows NT. And I was
> informed that Emacspeak runs even on the ITSY (a small device smaller than
> a Palm Pilot but with PC performance under Linux. Do we need more
> portability? What about a Java version of Emacspeak? Shouldn't be
> too difficult. So we better forget the portability argument.
> 
> So I think you are mistaken here.
> 
> --Hans 
> 
> 
> On Thu, 11 Mar 1999, Buddy Brannan wrote:
> 
> > Ah, Hans, but here's the thing. Am I mistaken in the notion that the
> > Emacspeak speech server, like Emacspeak itself, requires Emacs? What about
> > those of us who just want to use a shell without Emacs? (BTW...I'll
> > probably get Emacspeak going once I get Linux up, but....using Emacspeak to
> > do an install seems a little impractical....where Brett's svlpro sounds
> > like it'd work, but you can't use the Emacspeak speech server for this, can
> > you?) So while Emacspeak's speech server works with a lot of stuff....it
> > doesn't seem very portable. Unless, of course, I'm mistaken, which wouldn't
> > surprise me.
> > 
> > --
> > Buddy Brannan
> > Internet America Customer Care Technician
> > Email: buddy.brannan@airmail.net
> > or davros@airmail.net
> > The 1-800-Be-A-Geek people :)
> > ------Standard disclaimers re: personal opinions not reflecting the views
> > of my employer and all that apply here------
> 
> ---
> Send your message for blinux-list to blinux-list@redhat.com
> Blinux software archive at ftp://leb.net/pub/blinux
> Blinux web page at http://leb.net/blinux
> To unsubscribe send mail to blinux-list-request@redhat.com
> with subject line: unsubscribe
> 


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: speech standard
     ` Hans Zoebelein
       ` Buddy Brannan
       ` Whistler
@      ` Whistler
         ` Whistler
       ` Nolan Darilek
  3 siblings, 1 reply; 77+ messages in thread
From: Whistler @  UTC (permalink / raw)
  To: blinux-list



On Thu, 11 Mar 1999, Hans Zoebelein wrote:

> 
> Concerning the 'install phenomenon': Many people who are blind want to
> have speech to 'install the OS' by their own. I think that this serves 
> for them more as a proof of independence than as an indication, that
> they have to install their OS on a daily basis. I personally install one
> time about every every 3 years or so. So the install quote is about 0.091
> percent of my total computer usage (if an install takes a whole day every
> three years) 




Nice. I have had to have machines installed 14 times this year, and
unlike you I am not a single-machine user: I administer many machines,
and I would give my left nut to be able to install Linux without having
some sighted guy sitting there going "do you want this, do you want
that". I know of the systems that allow you to install through a serial
port, but that personally sucks. I have had no really perfect install
with that method, and until I can install Linux as easily as I could
install MS-DOS I will not be pleased.


Ken /whistler

> But OK, if you really want this 'boot speech feature', you'll
> need a device which plugs in like a video card and gives you output
> from power on time. All other devices which hang on a serial or parallel
> port cannot get output until the kernel is up. So you won't get 'power
> on speech' but only 'booting speech'. But you have 'booting speech' anyway 
> because Linux produces a nice file called 'dmsg' which holds the
> messages produced during your last boot action. Do you really want to
> hear such interesting stuff like this?
> 
> [begin of my /var/log/dmsg file]
> c310
> pcibios_init : BIOS32 Service Directory entry at 0xfc740
> pcibios_init : PCI BIOS revision 2.00 entry at 0xfc770
> Probing PCI hardware.
> Calibrating delay loop.. ok - 49.87 BogoMIPS
> Memory: 63156k/65536k available (616k kernel code, 384k reserved, 1216k
> data)
> This processor honours the WP bit even when in supervisor mode. Good.
> Swansea University Computer Society NET3.035 for Linux 2.0
> NET3: Unix domain sockets 0.13 for Linux NET3.035.
> Swansea University Computer Society TCP/IP for NET3.034
> IP Protocols: ICMP, UDP, TCP
> VFS: Diskquotas version dquot_5.6.0 initialized
> Checking 386/387 coupling... Ok, fpu using exception 16 error reporting.
> Checking 'hlt' instruction... Ok.
> Linux version 2.0.33 (root@goldfish) (gcc version 2.7.2.3) #17 Tue Jun 30
> 18:35:
> 05 CEST 1998
> Starting kswapd v 1.4.2.2
> Real Time Clock Driver v1.07
> Ramdisk driver initialized : 16 ramdisks of 4096K size
> Floppy drive(s): fd0 is 1.44M, fd1 is 1.2M                       
> FDC 0 is a National Semiconductor PC87306
> md driver 0.35 MAX_MD_DEV=4, MAX_REAL=8
> ncr53c8xx: at PCI bus 0, device 1, function 0
> ncr53c8xx: 53c810 detected
> ncr53c810-0: rev=0x01, base=0xfbfef000, io_port=0xe800, irq=9
> ncr53c810-0: ID 7, Fast-10, Parity Checking
> ncr53c810-0: restart (scsi reset).
> scsi0 : ncr53c8xx - revision 2.4a
> scsi : 1 host.
> ncr53c810-0-<0,0>: using tagged command queueing, up to 4 cmds/lun
>   Vendor: SEAGATE   Model: ST12400N          Rev: 8580
>   Type:   Direct-Access                      ANSI SCSI revision: 02
> Detected scsi disk sda at scsi0, channel 0, id 0, lun 0
> ncr53c810-0-<1,0>: using tagged command queueing, up to 4 cmds/lun    
> ..... and so on and so on....
> [most part of dmsg cut]
> 
> I never look at this file. Would you like to hear this stuff every
> time your box is booting? I doubt it.
> 
> And concerning 'portability': this speech server should be runnable on
> every UNIX machine under  the sun (sic!) and anywhere else. Trusted
> sources told me even about Emacspeak on Windows NT. And I was
> informed that Emacspeak runs even on the ITSY (a small device smaller than
> a Palm Pilot but with PC performance under Linux. Do we need more
> portability? What about a Java version of Emacspeak? Shouldn't be
> too difficult. So we better forget the portability argument.
> 
> So I think you are mistaken here.
> 
> --Hans 
> 
> 
> On Thu, 11 Mar 1999, Buddy Brannan wrote:
> 
> > Ah, Hans, but here's the thing. Am I mistaken in the notion that the
> > Emacspeak speech server, like Emacspeak itself, requires Emacs? What about
> > those of us who just want to use a shell without Emacs? (BTW...I'll
> > probably get Emacspeak going once I get Linux up, but....using Emacspeak to
> > do an install seems a little impractical....where Brett's svlpro sounds
> > like it'd work, but you can't use the Emacspeak speech server for this, can
> > you?) So while Emacspeak's speech server works with a lot of stuff....it
> > doesn't seem very portable. Unless, of course, I'm mistaken, which wouldn't
> > surprise me.
> > 
> > --
> > Buddy Brannan
> > Internet America Customer Care Technician
> > Email: buddy.brannan@airmail.net
> > or davros@airmail.net
> > The 1-800-Be-A-Geek people :)
> > ------Standard disclaimers re: personal opinions not reflecting the views
> > of my employer and all that apply here------
> 
> 


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: speech standard
   speech standard Hans Zoebelein
   ` Buddy Brannan
@  ` Kevin Forge
     ` Charles McCathieNevile
  1 sibling, 1 reply; 77+ messages in thread
From: Kevin Forge @  UTC (permalink / raw)
  To: blinux-list

A man after Mr. Torvalds' own heart :)

I very much agree.  Get Emacspeak to work perfectly, then look at 
defining that as a standard.  One thing to remember is that Linux 
and OSS programmers code what they consider "fun", what will get
them some fame, or, most importantly, things they need personally.

This is therefore a call to arms for all the blind programmers on 
this list, all the programmers with blind friends and relatives,
and all the programmers who want to be in the hall of fame with 
Larry, Alan, Mathias and Miguel.

Yep, I couldn't program my way out of a wet paper bag.  I am a 
hardware man, but I do know how to make noise and call decent
programmers.  The thing is, most don't know that blinux exists, and I
didn't even think to look until I got close to Wendy and started
chatting with her friends.  ( Am I the only one who thinks it's weird
how most blind people have lots of blind friends and little
interaction with sighted people? )


Hans Zoebelein wrote:
> 
> We shall bake small breads which fit into the oven.
> 
> Before creating a "speech standard" (do you think T.V. Raman will bend to
> this 'standard'?)  we shall use the building blocks which are already here
> and have proven their existence in real life.
> 
> Take the Emacspeak speech server as a basis for speech output. It supports
> a number of hardware synthesizers and even uses (still buggy) the Mbrola
> software synthesizer.
> 
> Write modules for other hardware synthesizers. Lets kill the damn bugs of
> the mbrola-server which stalls from time to time and takes Emacspeak down
> too sometimes. (The freephone text to phoneme translator
> was buggy like a alley cat when I tried him out. Stupid things like buffer
> overflows and so on shined up) Make this run solid.
> 
> Write a reasonable module which supports FESTIVAL speech synth for the
> Emacspeak speech server. The default FESTIVAL server option is a joke. If
> there are twenty speech requests, they are not queued into one client but
> twenty clients are opened at the same time. Now you have time to drink
> a cup of coffee until speech comes back (_if_ it comes back) Not a
> very intelligent solution which cries for improvement.
> 
> Now when _this_ is done and all hardware synths including  mbrola and
> festival are running nicely without bugs on the Emacspeak speech server,
> then we can _think_ about, if we need another speech standard than this
> which we have just have created.
>


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: speech standard
   ` speech standard Kevin Forge
@    ` Charles McCathieNevile
  0 siblings, 0 replies; 77+ messages in thread
From: Charles McCathieNevile @  UTC (permalink / raw)
  To: Kevin Forge; +Cc: blinux-list

Actually Kevin,

getting out of the paper bag is a hardware problem. (smile)

Re your other message: Emacspeak uses Aural CSS2 stylesheets (which
drew extensively on the work of some guy called T.V. Raman (grin)) to
give things different voices. (CSS2 is at
http://www.w3.org/TR/REC-CSS2 if you want to check it out.)
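For the curious, a stylesheet along those lines might look like this.
The property names come from CSS2's aural section; the voice names
themselves are synthesizer-specific and purely illustrative:

```css
/* Aural CSS2 sketch: give document elements distinct voices. */
h1   { voice-family: paul; pitch: low; stress: 60; }
em   { pitch: high; richness: 90; }
code { voice-family: harry; speech-rate: slow; }
```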

Cheers

Charles

On Fri, 12 Mar 1999, Kevin Forge wrote:

  A man after Mr. Torvalds' own Hart :)
  
  I very much agree.  Get Emacspeak to work perfectly then look at 
  defining that as a standard.  One thing to remember is that Linux 
  and OSS programers code what they consider "fun", what will get
  them some fame or most importantly things that they need personally.
  
  This is therefore a call to arms for all the blind programers on 
  this list.  And all the programers with Blind friends and Relatives,
  And all the programers who want to be in the hall of fame with 
  Larry, Alan, Mathias and Miguel.
  
  Yep.  I couldn't program my way out of a wet paper bag.  I am a 
  hardware man, but I do know how to make noise and call decent
  programers.  I.e. most don't know that blinux exists and I didn't
  even think to look until I got close to Wendy and started chatting
  with her friends.  ( Am I the only one that think it's weird how 
  most blind people have lots of blind friends and little interruption
  with sited people ?  )
  
  
  Hans Zoebelein wrote:
  > 
  > We shall bake small breads which fit into the oven.
  > 
  > Before creating a "speech standard" (do you think T.V. Raman will bend to
  > this 'standard'?)  we shall use the building blocks which are already here
  > and have proven their existence in real life.
  > 
  > Take the Emacspeak speech server as a basis for speech output. It supports
  > a number of hardware synthesizers and even uses (still buggy) the Mbrola
  > software synthesizer.
  > 
  > Write modules for other hardware synthesizers. Lets kill the damn bugs of
  > the mbrola-server which stalls from time to time and takes Emacspeak down
  > too sometimes. (The freephone text to phoneme translator
  > was buggy like a alley cat when I tried him out. Stupid things like buffer
  > overflows and so on shined up) Make this run solid.
  > 
  > Write a reasonable module which supports FESTIVAL speech synth for the
  > Emacspeak speech server. The default FESTIVAL server option is a joke. If
  > there are twenty speech requests, they are not queued into one client but
  > twenty clients are opened at the same time. Now you have time to drink
  > a cup of coffee until speech comes back (_if_ it comes back) Not a
  > very intelligent solution which cries for improvement.
  > 
  > Now when _this_ is done and all hardware synths including  mbrola and
  > festival are running nicely without bugs on the Emacspeak speech server,
  > then we can _think_ about, if we need another speech standard than this
  > which we have just have created.
  >
  
  

--Charles McCathieNevile            mailto:charles@w3.org
phone: +1 617 258 0992   http://www.w3.org/People/Charles
W3C Web Accessibility Initiative    http://www.w3.org/WAI
MIT/LCS  -  545 Technology sq., Cambridge MA, 02139,  USA


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: speech standard
     ` Hans Zoebelein
                       ` (2 preceding siblings ...)
       ` Whistler
@      ` Nolan Darilek
         ` Hans Zoebelein
  3 siblings, 1 reply; 77+ messages in thread
From: Nolan Darilek @  UTC (permalink / raw)
  To: blinux-list

   Date: Thu, 11 Mar 1999 22:25:04 +0100 (CET)
   From: Hans Zoebelein <zocki@goldfish.cube.net>
   Isn't there Emacspeak + Eterm which _is_ a shell with a working screen
   reader...) 

Yes. But try to use Pine or elm, or play sail, nethack, or rogue, and
the "screen reader" becomes virtually unusable. Unless, of course, you
like spending time in review mode, arrowing around constantly.

While I agree that the typical screen reader doesn't work very well
out-of-the-box, I've been toying with the idea (though I'll probably
be unable to find the time for it) of a "smart" screen reader. Maybe
something which incorporates a scripting language, and allows you to
extend its functionality by tailoring it to your specific application.

"But emacspeak already does this," you say. Well, I personally have
found Elisp very difficult. And, since I'm currently actively working
on a C++ coding project, and have dabbled extensively in C and Python,
my statement isn't made out of a lack of experience. I've tried to
code Emacs/Emacspeak extensions for sail, Micq and even gnuchess, all
of which flopped after several weeks of work with the Elisp tutorial,
lots of consultations with c-h f, examinations of other code snippets,
etc. Using something like Python would allow those of us who prefer
imperative languages over functional ones to experience the same
freedoms which some of you claim to have attained with Elisp.

   BTW the typical screen reader is _stupid_. It does not know
   what you want to know and what not. So it babbles everything it can 
   find on the screen without own intelligence. What will happen if it hits
   multi column text? It will read line 1 of column 1 and then line 1 of
   column 2. But that's not the way you want to read the article in parallel.
   You want to have an intelligent tool that recognized: Oops! We have two
   columns and reads first column one and then column two. (Emacspeak knows
   this). 

The typical screen reader doesn't require megs of space and memory, though.

   Concerning the 'install phenomenon': Many people who are blind want to
   have speech to 'install the OS' by their own. I think that this serves 
   for them more as a proof of independence than as an indication, that
   they have to install their OS on a daily basis. I personally install one
   time about every every 3 years or so. So the install quote is about 0.091
   percent of my total computer usage (if an install takes a whole day every
   three years) 

I'm currently a computer science major. If I don't choose to delve
into AI research, I may very well become a consultant. In that case,
my JOB may depend upon installing Linux on a daily basis. Why should a
company pay me to have someone glance over my shoulder and walk me
through the installation process? Furthermore, Emacspeak depends on
the fact that lots of system components integrate seamlessly with each
other. I had a bit of a mess one night when I wanted to upgrade my
system. I've heard the argument that upgrading isn't a necessity, that
you don't need to ride the bleeding edge, but that doesn't fly with
me. If I want to install the latest kernels, the latest versions of
emacs, etc. then I should be able to, and no one should tell me I
can't, but that's beside the point. :) Anyhow, I upgraded emacs from
20.1 to 20.3 using Debian's packages, and suddenly emacspeak wouldn't
speak. It took me two hours to fix, because I had to constantly
'command >/dev/ttyS2', moving at a horribly slow pace. Eventually I
located and fixed a broken symlink, and it worked perfectly. And no,
there was no one available to help me; I had to either solve the
problem while leaving enough time to finish an assignment which was
due the next day, or wait until that assignment was past-due when I
could locate someone to help. Anyhow, while Emacspeak is certainly
great, it doesn't work well in situations where one of Emacs' many
components doesn't function, or in a situation during which you can't
seem to get to a login prompt, which HAS happened to me before.

   But OK, if you really want this 'boot speech feature', you'll need a
   device which plugs in like a video card and gives you output
   from power on time. All other devices which hang on a serial or parallel
   port cannot get output until the kernel is up. So you won't get 'power
   on speech' but only 'booting speech'.

I thought there was a change in the 2.2 series of kernels which could
 send kernel output to a serial port? I see what you're getting at,
 but I don't see its relevance. I think that what is being asked for
 here is simply a traditional screen reader. As one user (Ken?)
 contends, the Emacspeak solution is great, assuming everyone adopts
 the various interface ideas which Emacspeak utilizes. Try to use an
 application which doesn't fit within this framework, and you're on
 your own. I, personally, need a good web browser which supports
 secure transactions; W3 apparently does, but I've been trying to
 request support and assistance on the various W3 lists with no
 results. Even a direct email to the author sent several days ago
 hasn't been answered. So, yes, the ideas are great. But,
 realistically, I don't see the software world suddenly making the
 necessary shifts. A smart screen reader which allows users to
 customize the speech interface while not demanding the large overhead
 of emacs would grant more freedom in the ability to pick and choose
 applications without the need for going into review mode and reading
 one character at a time.
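There was such a change: kernels in the 2.2 series can direct console messages to a serial port through the `console=` boot parameter, provided serial-console support was compiled in. A lilo.conf fragment as illustration (the image name, label, port, and speed are example values; adjust them to your setup):

```
# /etc/lilo.conf (fragment) -- example values, adjust to your machine
image=/boot/vmlinuz-2.2.1
        label=linux
        # duplicate console output on the second serial port at 9600 baud,
        # while keeping the normal virtual console as well
        append="console=ttyS1,9600 console=tty0"
```

Run lilo again after editing, or the change never reaches the boot loader.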

In summary, Emacspeak is a good thing(tm). :) But, it does have its
shortcomings. It also seems to require the user to simply adopt
certain beliefs (No frequent upgrading, no re-installing) which are
pointless and restrictive. I think that what's being proposed here is
not a chatterbox which speaks constantly, but a small, light-weight
program which utilizes a standard interface to provide speech in the
situations where Emacspeak would not be useful, or to even serve as a
complete Emacspeak replacement for those who don't want to adopt the
complete Emacs way of life. I don't see this as an unreasonable
request and, if I had more time on my hands, I'd be more than happy to
put some of my ideas into code. (Note that I'm not saying I won't, but
well ... Who knows :)

See 'ya, and have a great spring break for any of you college students
out there who, like me, have an awesome spring break coming up soon!
:)

,-----------------------------------------------------------------------------.
>              Nolan Darilek             |                                    <
>           nolan_d@bigfoot.com          |    University of Texas at Austin   <
>         nolan_d@mail.utexas.edu        |      Freshman, Computer Science    <
>              ICQ: 15709478             |      Orientation Advisor, 1999     <
>    http://www.bigfoot.com/~nolan_d/    |                                    <
`-----------------------------------------------------------------------------'
              	"Are you sure you're not an encyclopedia salesman?"
            	"No, Ma'am.  Just a burglar, come to ransack the flat."
                               		-- Monty Python


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: speech standard
       ` Nolan Darilek
@        ` Hans Zoebelein
           ` Nolan Darilek
  0 siblings, 1 reply; 77+ messages in thread
From: Hans Zoebelein @  UTC (permalink / raw)
  To: blinux-list

On 12 Mar 1999, Nolan Darilek wrote:

>    Isn't there Emacspeak + Eterm which _is_ a shell with a working screen
>    reader...) 
> 
> Yes. But, try to use Pine, elm, play sail nethack or rogue, and the
> "screen reader" becomes virtually unusable. Unless, of course, you like
> spending time in review mode, arrowing around constantly.

Does any screen reader solve this problem? Don't use pine but the 
mail stuff which comes with Emacs. With Emacspeak you can even play
Tetris. Try this with a screen reader (good luck :).

> 
> While I agree that the typical screen reader doesn't work very well
> out-of-the-box, I've been toying with the idea (though I'll probably
> be unable to find the time for it) of a "smart" screen reader. Maybe
> something which incorporates a scripting language, and allows you to
> extend its functionality by tailoring it to your specific application.
> 
> "But emacspeak already does this," you say. Well, I personally have
> found Elisp very difficult. And, since I'm currently actively working
> on a C++ coding project, and have dabbled extensively in C and Python,
> my statement isn't made out of a lack of experience. I've tried to
> code Emacs/Emacspeak extensions for sail, Micq and even gnuchess, all
> of which flopped after several weeks of work with the Elisp tutorial,
> lots of consultations with c-h f, examinations of other code snippets,
> etc. Using something like Python would grant those of us who prefer
> imperative languages over functional ones to experience the same
> freedoms which some of you claim to have attained with Elisp.

Then you have to use a screen reader which uses C++...


> 
>    BTW the typical screen reader is _stupid_. It does not know
>    what you want to know and what not. So it babbles everything it can 
>    find on the screen without own intelligence. What will happen if it hits
>    multi column text? It will read line 1 of column 1 and then line 1 of
>    column 2. But that's not the way you want to read the article in parallel.
>    You want to have an intelligent tool that recognized: Oops! We have two
>    columns and reads first column one and then column two. (Emacspeak knows
>    this). 
> 
> The typical screen reader doesn't require megs of space and memory, though.

Then you should go back to good old DOS and be happy with config.sys
and autoexec.bat. There are a lot of tools to squeeze the last free
kilobyte of himem out of your box. Btw., how many megs might a
screen reader need which runs under another OS like MS Windows?


> 
>    Concerning the 'install phenomenon': Many people who are blind want to
>    have speech to 'install the OS' by their own. I think that this serves 
>    for them more as a proof of independence than as an indication, that
>    they have to install their OS on a daily basis. I personally install one
>    time about every every 3 years or so. So the install quote is about 0.091
>    percent of my total computer usage (if an install takes a whole day every
>    three years) 
> 
> I'm currently a computer science major. If I don't choose to delve
> into AI research, I may very well become a consultant. In that case,
> my JOB may depend upon installing Linux on a daily basis. Why should a
> company pay me to have someone glance over my shoulder and walk me
> through the installation process? Furthermore, Emacspeak depends on
> the fact that lots of system components integrate seamlessly with each
> other. I had a bit of a mess one night when I wanted to upgrade my
> system. I've heard the argument that upgrading isn't a necessity, that
> you don't need to ride the bleeding edge, but that doesn't fly with
> me. If I want to install the latest kernels, the latest versions of
> emacs, etc. then I should be able to, and no one should tell me I
> can't, but that's besides the point. :) Anyhow, I upgraded emacs from
> 20.1 to 20.3 using Debian's packages, and suddenly emacspeak wouldn't
> speak. It took me two hours to fix, because I had to constantly
> 'command >/dev/ttyS2', moving at a horribly slow pace. Eventually I
> located and fixed a broken symlink, and it worked perfectly. And no,
> there was no one available to help me; I had to either solve the
> problem while leaving enough time to finish an assignment which was
> due the next day, or wait until that assignment was past-due when I
> could locate someone to help. Anyhow, while Emacspeak is certainly
> great, it doesn't work well in situations where one of Emacs' many
> components doesn't function, or in a situation during which you can't
> seem to get to a login prompt, which HAS happened to me before.

What about using a hardware braille line? Or you may use a hardware
speech synth which uses the serial option of the 2.2 kernel. But I doubt
that you will use this tool for after-boot work.
> 
>    But OK, if you really want this 'boot speech feature', you'll need a
>    device which plugs in like a video card and gives you output
>    from power on time. All other devices which hang on a serial or parallel
>    port cannot get output until the kernel is up. So you won't get 'power
>    on speech' but only 'booting speech'.
> 
> I thought there was a change in the 2.2 series of kernels which could
>  send kernel output to a serial port? I see what you're getting at,
>  but I don't see its relevance. I think that what is being asked for
>  here is simply a traditional screen reader. As one user (Ken?)
>  contends, the Emacspeak solution is great, assuming everyone adopts
>  the various interface ideas which Emacspeak utilizes. Try to use an
>  application which doesn't fit within this framework, and you're on
>  your own. I, personally, need a good web browser which supports
>  secure transactions; W3 apparently does, but I've been trying to
>  request support and assistance on the various W3 lists with no
>  results. Even a direct email to the author sent several days ago
>  hasn't been answered. So, yes, the ideas are great. But,
>  realistically, I don't see the software world suddenly making the
>  necessary shifts. A smart screen reader which allows users to
>  customize the speech interface while not demanding the large overhead
>  of emacs would grant more freedom in the ability to pick and choose
>  applications without the need for going into review mode and reading
>  one character at a time.

Send another email, whining won't cut it.

> 
> In summary, Emacspeak is a good thing(tm). :) But, it does have its
> shortcomings. It also seems to require the user to simply adopt
> certain beliefs (No frequent upgrading, no re-installing) which are
> pointless and restrictive. I think that what's being proposed here is
> not a chatterbox which speaks constantly, but a small, light-weight
> program which utilizes a standard interface to provide speech in the
> situations where Emacspeak would not be useful, or to even serve as a
> complete Emacspeak replacement for those who don't want to adopt the
> complete Emacs way of life. I don't see this as an unreasonable
> request and, if I had more time on my hands, I'd be more than happy to
> put some of my ideas into code. (Note that I'm not saying I won't, but
> well ... Who knows :)

Will a screen reader instead solve your problems _after_ booting?
>
 
> See 'ya, and have a great spring break for any of you college students
> out there who, like me, have an awesome spring break coming up soon!
> :)

Enjoy!
Hans


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: speech standard
         ` Whistler
@          ` Hans Zoebelein
             ` Whistler
             ` Whistler
           ` Dave Mielke
  1 sibling, 2 replies; 77+ messages in thread
From: Hans Zoebelein @  UTC (permalink / raw)
  To: blinux-list

Ken, 

are you sure that it has to be the _same_ tool which presents you the 
boot sequence _and_ is used for all administrative work on your box 
later? 

There is a saying: "When you want to go hunting and you don't have a dog,
you'll take a cat." But one can wonder if a cat will prove very helpful
as a hunting companion.

Same applies here: You may use tool A for this task and tool B for that
task for optimum performance. If someone prefers to use tool A for all
tasks, results will eventually suffer.

On Thu, 11 Mar 1999, Whistler wrote:

> To answer your question I would like the choice to listen to any part of
> the boot sequence that a sighted person can.  Understand I wouldn't want
> to be forced to listen to any of it but I would like the ability to listen
> to it as it was happening not after the fact.  As much as I hate Acer I
> have to say they did one cool thing back on a 386 25mhz machine the bios
> actually sent boot up information to the serial port and I was able to
> listen to the ram count up and everything including able to set up the
> bios.  That was great when I wanted it in the same way being able to
> listen to all the stuff you listed would be great when needed.  
> 
> Ken /whistler


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: speech standard
       ` Whistler
@        ` Whistler
           ` Hans Zoebelein
           ` Dave Mielke
  0 siblings, 2 replies; 77+ messages in thread
From: Whistler @  UTC (permalink / raw)
  To: blinux-list

To answer your question, I would like the choice to listen to any part of
the boot sequence that a sighted person can. Understand, I wouldn't want
to be forced to listen to any of it, but I would like the ability to listen
to it as it was happening, not after the fact. As much as I hate Acer, I
have to say they did one cool thing back on a 386 25MHz machine: the BIOS
actually sent boot-up information to the serial port, and I was able to
listen to the RAM count up and everything, including being able to set up
the BIOS. That was great when I wanted it; in the same way, being able to
listen to all the stuff you listed would be great when needed.

Ken /whistler

On Thu, 11 Mar 1999, Whistler wrote:

> 
> 
> On Thu, 11 Mar 1999, Hans Zoebelein wrote:
> 
> > 
> > Concerning the 'install phenomenon': Many people who are blind want to
> > have speech to 'install the OS' by their own. I think that this serves 
> > for them more as a proof of independence than as an indication, that
> > they have to install their OS on a daily basis. I personally install one
> > time about every every 3 years or so. So the install quote is about 0.091
> > percent of my total computer usage (if an install takes a whole day every
> > three years) 
> 
> 
> 
> 
> Nice I have had to have machines installed 14 times this year and I
> unlike you am not a single machine user I adminester many machines and I
> would give my left nut to be able to install Linux with out having some
> sighted guy sitting there going do youw ant this do you want that.  I
> know of the systems that allow you to install through a serial port but
> that personally sucks.  I have had no real  perfect install with that
> method and until I can install Linux as easy as I could install MsDos I
> will not be pleased.  
> 
> 
> Ken /whistler
> 
> > But OK, if you really want this 'boot speech feature', you'll
> > need a device which plugs in like a video card and gives you output
> > from power on time. All other devices which hang on a serial or parallel
> > port cannot get output until the kernel is up. So you won't get 'power
> > on speech' but only 'booting speech'. But you have 'booting speech' anyway 
> > because Linux produces a nice file called 'dmsg' which holds the
> > messages produced during your last boot action. Do you really want to
> > hear such interesting stuff like this?
> > 
> > [begin of my /var/log/dmsg file]
> > c310
> > pcibios_init : BIOS32 Service Directory entry at 0xfc740
> > pcibios_init : PCI BIOS revision 2.00 entry at 0xfc770
> > Probing PCI hardware.
> > Calibrating delay loop.. ok - 49.87 BogoMIPS
> > Memory: 63156k/65536k available (616k kernel code, 384k reserved, 1216k
> > data)
> > This processor honours the WP bit even when in supervisor mode. Good.
> > Swansea University Computer Society NET3.035 for Linux 2.0
> > NET3: Unix domain sockets 0.13 for Linux NET3.035.
> > Swansea University Computer Society TCP/IP for NET3.034
> > IP Protocols: ICMP, UDP, TCP
> > VFS: Diskquotas version dquot_5.6.0 initialized
> > Checking 386/387 coupling... Ok, fpu using exception 16 error reporting.
> > Checking 'hlt' instruction... Ok.
> > Linux version 2.0.33 (root@goldfish) (gcc version 2.7.2.3) #17 Tue Jun 30
> > 18:35:
> > 05 CEST 1998
> > Starting kswapd v 1.4.2.2
> > Real Time Clock Driver v1.07
> > Ramdisk driver initialized : 16 ramdisks of 4096K size
> > Floppy drive(s): fd0 is 1.44M, fd1 is 1.2M                       
> > FDC 0 is a National Semiconductor PC87306
> > md driver 0.35 MAX_MD_DEV=4, MAX_REAL=8
> > ncr53c8xx: at PCI bus 0, device 1, function 0
> > ncr53c8xx: 53c810 detected
> > ncr53c810-0: rev=0x01, base=0xfbfef000, io_port=0xe800, irq=9
> > ncr53c810-0: ID 7, Fast-10, Parity Checking
> > ncr53c810-0: restart (scsi reset).
> > scsi0 : ncr53c8xx - revision 2.4a
> > scsi : 1 host.
> > ncr53c810-0-<0,0>: using tagged command queueing, up to 4 cmds/lun
> >   Vendor: SEAGATE   Model: ST12400N          Rev: 8580
> >   Type:   Direct-Access                      ANSI SCSI revision: 02
> > Detected scsi disk sda at scsi0, channel 0, id 0, lun 0
> > ncr53c810-0-<1,0>: using tagged command queueing, up to 4 cmds/lun    
> > ..... and so on and so on....
> > [most part of dmsg cut]
> > 
> > I never look at this file. Would you like to hear this stuff every
> > time your box is booting? I doubt it.
> > 
> > And concerning 'portability': this speech server should be runnable on
> > every UNIX machine under  the sun (sic!) and anywhere else. Trusted
> > sources told me even about Emacspeak on Windows NT. And I was
> > informed that Emacspeak runs even on the ITSY (a small device smaller than
> > a Palm Pilot but with PC performance under Linux. Do we need more
> > portability? What about a Java version of Emacspeak? Shouldn't be
> > too difficult. So we better forget the portability argument.
> > 
> > So I think you are mistaken here.
> > 
> > --Hans 
> > 
> > 
> > On Thu, 11 Mar 1999, Buddy Brannan wrote:
> > 
> > > Ah, Hans, but here's the thing. Am I mistaken in the notion that the
> > > Emacspeak speech server, like Emacspeak itself, requires Emacs? What about
> > > those of us who just want to use a shell without Emacs? (BTW...I'll
> > > probably get Emacspeak going once I get Linux up, but....using Emacspeak to
> > > do an install seems a little impractical....where Brett's svlpro sounds
> > > like it'd work, but you can't use the Emacspeak speech server for this, can
> > > you?) So while Emacspeak's speech server works with a lot of stuff....it
> > > doesn't seem very portable. Unless, of course, I'm mistaken, which wouldn't
> > > surprise me.
> > > 
> > > --
> > > Buddy Brannan
> > > Internet America Customer Care Technician
> > > Email: buddy.brannan@airmail.net
> > > or davros@airmail.net
> > > The 1-800-Be-A-Geek people :)
> > > ------Standard disclaimers re: personal opinions not reflecting the views
> > > of my employer and all that apply here------
> > 
> 


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: speech standard
         ` Hans Zoebelein
@          ` Nolan Darilek
             ` Hans Zoebelein
  0 siblings, 1 reply; 77+ messages in thread
From: Nolan Darilek @  UTC (permalink / raw)
  To: blinux-list


   Resent-Cc: recipient list not shown: ;
   MBOX-Line: From blinux-list-request@redhat.com  Thu Mar 11 22:51:01 1999
   MBOX-Line: From foo@bar  Fri Mar 12 04:22:03 1999
   Date: Fri, 12 Mar 1999 04:22:02 +0100 (CET)
   From: Hans Zoebelein <zocki@goldfish.cube.net>
   Resent-From: blinux-list@redhat.com
   Reply-To: blinux-list@redhat.com
   Precedence: list
   Resent-Sender: blinux-list-request@redhat.com

   Does any screen reader solve this problem? Don't use pine but the 
   mail stuff which comes with Emacs. With Emacspeak you even can play
   Tetris. Try this with a screen reader (good luck :).

No. See my statements below about a "smart" screen reader, one which
allows the user to create a speech interface for the application. For
pine, you could define windows which reduce the amount of speech to
only what is needed, thus making the application much more usable.

   Then you have to use a screen reader which uses C++...

Heh. I had considered writing mine in C++. Though, it doesn't need to
be extended in C++; an embedded Python interpreter with hooks for
loading specific scripts based on the foreground process would work
nicely. The core of the speech interface (windows, audio widgets,
etc.) would be hard-coded, but they can be instantiated in Python, and
perhaps modified. I'm still investigating the possibilities. This
differs from most standard screen readers which I've seen, because its
configuration is much more flexible.
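As a minimal sketch of that design, with every name hypothetical (this is not an existing screen reader's API): a registry maps the foreground process name to a user-supplied handler script, and a dumb read-everything handler is the fallback for applications nobody has scripted yet.

```python
# Hypothetical sketch of a scriptable screen reader core. Per-application
# handlers are registered under a process name; each screen update is
# dispatched to whichever handler matches the foreground process.

class Handler:
    """Fallback: speak every line, like a classic dumb screen reader."""
    def speak(self, screen):
        return list(screen)

class PineHandler(Handler):
    """Example user script: speak only the folder status line and the
    currently selected message, ignoring the rest of the screen."""
    def speak(self, screen):
        return [line for line in screen
                if line.startswith(("Folder:", ">"))]

class Reader:
    def __init__(self):
        self.handlers = {}
        self.fallback = Handler()

    def register(self, process_name, handler):
        self.handlers[process_name] = handler

    def on_update(self, process_name, screen):
        """Called on every screen change; returns the lines to speak."""
        handler = self.handlers.get(process_name, self.fallback)
        return handler.speak(screen)
```

Usage would look like `reader.register("pine", PineHandler())`, after which a pine screen yields two spoken lines instead of twenty-four, while an unscripted application still gets read in full.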

   Then youn should go back to good old DOS and be happy with config.sys
   and autoexec.bat. There are a lot of tools to get the last free himem
   kilobyte out of your box  Btw. how many megs might a
   screen reader need which runs under another OS like MS Windows?

This mentality is what I'm trying to avoid. Rescue disks are perfect
examples of where smaller products would be much better. In one
instance, I somehow managed to trash my Linux partition. Thus, I
couldn't use Emacspeak, unless you've figured out how to cram Emacs
and Emacspeak onto a floppy. :) Granted, zip disks can avoid this, and
I'll one day look into the possibility of creating a sort of 'Red
Cross Rescue Disk', one which gives you all standard rescue utilities,
Emacs/Emacspeak, and tools to log onto the net for support. :)

   What about using a hardware braille line? Or you may use a hardware
   speech synth which uses the serial option of the 2.2 kernel. But I doubt
   that you will use this tool for after boot work.

Certainly options worth considering. I'd love to be able to use brltty
with my Alva, but I haven't quite gotten it to work through the serial
port. I might try the parallel driver one of these days. The major
problem I had with that at the time was that it required root
privileges. I wasn't sure if I could just launch it in the background
and login as myself, or if I had to be root constantly to use it.

   Send another email, whining won't cut it.

I'm generally a very easy-going and difficult-to-anger individual. But
this one line, well, it frustrated and angered me quite a bit.

First, my previous post was meant to be an intelligent list of my
concerns with the current access solutions available for
Linux. Everyone seems to constantly praise Emacspeak and, while I feel
that it is great, I wanted to offer some of my issues with it. One of
those issues was the fact that relatively few applications were
well-supported, speech-wise, and that support for other applications
was non-existent, or not adequate. I'm trying to make do with
the applications which are supported, but I'm finding that my needs
aren't being met in certain areas, and that other speech solutions
would allow users to use other applications as easily. I currently
lack the experience to delve into Elisp source, so when W3 doesn't
work, I need to be able to request assistance from developers and
lists. When UT's massive bureaucracy points to a secure webpage, I'd
like to be able to click on the link and go there. Similarly,
professor rating sheets require the use of a UTEID (University of
Texas Electronic ID, gotta love acronyms :) and this, too, uses secure
transactions. Anyhow, my requests for support haven't accomplished
anything; I'm thinking about setting up a cron job to send out my
saved post once every few weeks or so. :) This is why I feel that a
more adaptable access solution is needed, one which doesn't depend on
application-specific modifications to provide auditory output. The
argument can be made that authors of opensource projects aren't
obligated to provide support for their work, and I agree with this
wholeheartedly; I've been guilty of this on an occasion or two. :)
But, sighted users can choose to use another web browser if they
experience problems like mine; I can't, or at least, not as easily.

But, apparently, that point was missed. Instead, you choose to imply
that I was 'whining'. I wouldn't have minded a more politely-stated
rebuttal of my point, but that wasn't given. I also have nothing
against intelligent arguing and debating, and while this post does for
the most part accomplish this, I feel that some care should be taken
in what is said. It's quite possible that I'm misunderstanding your
intent, but I'm reminded of the old saying: 'If you can't say anything
good . . .' :) So, let's try to keep the debate on an intellectual
level, and not a confrontational one. Remember, agree to
disagree... Sorry for the cliches, they've been pounded into my head
during the course of some conflict res. classes. :)

   Will a screen reader instead solve your problems _after_ booting?

If I am unable to access Emacs/Emacspeak, or want to use an
application which I don't know enough about Elisp to extend, yes it
will. Though, Emacs is a diverse environment, and I don't think a
screen reader will ever provide the level of flexibility that
Emacspeak does when Emacs is being used. So, I would use a screen
reader for accessing lynx, using pine and playing character-based
roguelikes and such, while I'd use Emacspeak whenever I need to code,
write papers or games, etc.

,-----------------------------------------------------------------------------.
>              Nolan Darilek             |                                    <
>           nolan_d@bigfoot.com          |    University of Texas at Austin   <
>         nolan_d@mail.utexas.edu        |      Freshman, Computer Science    <
>              ICQ: 15709478             |      Orientation Advisor, 1999     <
>    http://www.bigfoot.com/~nolan_d/    |                                    <
`-----------------------------------------------------------------------------'
   Don't worry about the world coming to an end today.  It's already tomorrow
                                 in Australia.
                              		-- Charles Schulz


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: speech standard
             ` Whistler
@              ` Hans Zoebelein
  0 siblings, 0 replies; 77+ messages in thread
From: Hans Zoebelein @  UTC (permalink / raw)
  To: blinux-list

Don't play clumsy now. Reading messages during boot time with an
application which depends on the booted OS is a pretty different task from
reading mail, writing a letter, calculating a spreadsheet, formatting
text for a publication, or playing Tetris.

The only thing which can help you here is an autonomous device which isn't
dependent on the booting CPU like a hardware speech synth. The famous
(software) screen reader leaves you as dead in the water as any
other pure software approach. 

While you will enjoy the autonomy of the hardware approach during boot
time, you might want a more flexible software approach later, depending
on what you expect your box to do for you. If you always _only_ want to
read texts, a hardware braille display should be perfect from booting
until shutdown.

--Hans


On Thu, 11 Mar 1999, Whistler wrote:

> 
> Hmm your interesting statement just struck me as well you said if someone
> uses  tool a for one task and tool B for another and someone decides to
> use tool a for all tasks results will suffer um isn't that what we are
> doing with emacspeak trying to do all tasks with tool A?   
> 
> 



> > Same applies here: You may use tool A for this task and tool B for that
> > task for optimum performance. If someone prefers to use tool A for all
> > tasks, results will eventually suffer.
> > 
> > On Thu, 11 Mar 1999, Whistler wrote:
> > 
> > > To answer your question I would like the choice to listen to any part of
> > > the boot sequence that a sighted person can.  Understand I wouldn't want
> > > to be forced to listen to any of it but I would like the ability to listen
> > > to it as it was happening not after the fact.  As much as I hate Acer I
> > > have to say they did one cool thing back on a 386 25mhz machine the bios
> > > actually sent boot up information to the serial port and I was able to
> > > listen to the ram count up and everything including able to set up the
> > > bios.  That was great when I wanted it in the same way being able to
> > > listen to all the stuff you listed would be great when needed.  
> > > 
> > > Ken /whistler
> > 
> > ---
> > Send your message for blinux-list to blinux-list@redhat.com
> > Blinux software archive at ftp://leb.net/pub/blinux
> > Blinux web page at http://leb.net/blinux
> > To unsubscribe send mail to blinux-list-request@redhat.com
> > with subject line: unsubscribe
> > 
> 
> ---
> Send your message for blinux-list to blinux-list@redhat.com
> Blinux software archive at ftp://leb.net/pub/blinux
> Blinux web page at http://leb.net/blinux
> To unsubscribe send mail to blinux-list-request@redhat.com
> with subject line: unsubscribe
> 


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: speech standard
           ` Hans Zoebelein
@            ` Whistler
             ` Whistler
  1 sibling, 0 replies; 77+ messages in thread
From: Whistler @  UTC (permalink / raw)
  To: blinux-list

Ah, no, that is not what I am saying. What I am saying is it would be nice to
have the tools. Thinking of it from a Unix standpoint, Emacs is, well,
bloated. While I don't mind using it for many tasks, I would like a
sleek AI-style speech package for other tasks and, yes, a bootstrap speech
package for installation. So no, I am not looking for a hunting dog to be
my guide dog; I just want the tools to do everything a sighted user can.
While I am wishing, I wouldn't mind a million dollars as well.

Ken /whistler

On Fri, 12 Mar 1999, Hans Zoebelein wrote:

> Ken, 
> 
> are you sure that it has to be the _same_ tool which presents you the 
> boot sequence _and_ is used for all administrative work on your box 
> later? 
> 
> There is a saying: "When you want to go hunting and you don't have a dog,
> you'll take a cat." But one can wonder if a cat will prove very helpful
> as a hunting companion.
> 
> Same applies here: You may use tool A for this task and tool B for that
> task for optimum performance. If someone prefers to use tool A for all
> tasks, results will eventually suffer.
> 
> On Thu, 11 Mar 1999, Whistler wrote:
> 
> > To answer your question I would like the choice to listen to any part of
> > the boot sequence that a sighted person can.  Understand I wouldn't want
> > to be forced to listen to any of it but I would like the ability to listen
> > to it as it was happening not after the fact.  As much as I hate Acer I
> > have to say they did one cool thing back on a 386 25mhz machine the bios
> > actually sent boot up information to the serial port and I was able to
> > listen to the ram count up and everything including able to set up the
> > bios.  That was great when I wanted it in the same way being able to
> > listen to all the stuff you listed would be great when needed.  
> > 
> > Ken /whistler
> 
> ---
> Send your message for blinux-list to blinux-list@redhat.com
> Blinux software archive at ftp://leb.net/pub/blinux
> Blinux web page at http://leb.net/blinux
> To unsubscribe send mail to blinux-list-request@redhat.com
> with subject line: unsubscribe
> 


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: speech standard
           ` Hans Zoebelein
             ` Whistler
@            ` Whistler
               ` Hans Zoebelein
  1 sibling, 1 reply; 77+ messages in thread
From: Whistler @  UTC (permalink / raw)
  To: blinux-list


Hmm, your interesting statement just struck me. You said if someone
uses tool A for one task and tool B for another, and someone decides to
use tool A for all tasks, results will suffer. Isn't that what we are
doing with Emacspeak, trying to do all tasks with tool A?



On Fri, 12 Mar 1999, Hans Zoebelein wrote:

> Ken, 
> 
> are you sure that it has to be the _same_ tool which presents you the 
> boot sequence _and_ is used for all administrative work on your box 
> later? 
> 
> There is a saying: "When you want to go hunting and you don't have a dog,
> you'll take a cat." But one can wonder if a cat will prove very helpful
> as a hunting companion.
> 
> Same applies here: You may use tool A for this task and tool B for that
> task for optimum performance. If someone prefers to use tool A for all
> tasks, results will eventually suffer.
> 
> On Thu, 11 Mar 1999, Whistler wrote:
> 
> > To answer your question I would like the choice to listen to any part of
> > the boot sequence that a sighted person can.  Understand I wouldn't want
> > to be forced to listen to any of it but I would like the ability to listen
> > to it as it was happening not after the fact.  As much as I hate Acer I
> > have to say they did one cool thing back on a 386 25mhz machine the bios
> > actually sent boot up information to the serial port and I was able to
> > listen to the ram count up and everything including able to set up the
> > bios.  That was great when I wanted it in the same way being able to
> > listen to all the stuff you listed would be great when needed.  
> > 
> > Ken /whistler
> 
> ---
> Send your message for blinux-list to blinux-list@redhat.com
> Blinux software archive at ftp://leb.net/pub/blinux
> Blinux web page at http://leb.net/blinux
> To unsubscribe send mail to blinux-list-request@redhat.com
> with subject line: unsubscribe
> 


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: speech standard
         ` Whistler
           ` Hans Zoebelein
@          ` Dave Mielke
             ` Whistler
  1 sibling, 1 reply; 77+ messages in thread
From: Dave Mielke @  UTC (permalink / raw)
  To: blinux-list

On Thu, 11 Mar 1999, Whistler wrote:
>I would like the choice to listen to any part of
>the boot sequence that a sighted person can.  Understand I wouldn't want
>to be forced to listen to any of it but I would like the ability to listen
>to it as it was happening not after the fact.  

I think that you have a misconception regarding how easy it is for a sighted
person to really know what is happening by watching the screen as the system
boots. Perhaps, in the days of slower processors, it was easy. Now, however,
with faster processors, the output scrolls by too quickly. The best a sighted
person can do is focus very intently on the screen and try to pick out an error
message or two. While the eye is suited for this task, the ear is not. You
would be unable to listen to your system boot sequence spoken at break-neck
speed, and pick out anything at all, let alone discern an error message from
the rest of the noise.

A sighted person in search of a system boot error usually does exactly what you
can also do with a screen reader running in user space. He either reviews the
system log files with an editor, or uses Shift-PageUp to scroll backward in the
console buffer once the system has finished booting. The latter option, of
course, would require that the screen reader process be implemented in a way
which most aren't, i.e. it must be startable somewhere within the boot sequence
itself (after the point at which the synthesizer's serial port has been
configured and enabled), and, therefore, would necessarily be running
independently of, and asynchronously to, any login session.

I'm not sure that it would be very easy to get EmacSpeak to work properly when
run in this way, i.e. separate from, and asynchronous to, all user login
sessions. Its biggest problem in this regard is that it would be insecure
because it's part of Emacs, which means that it would have access to all kinds
of system resources without the benefit of prior user authentication. Another
significant problem is that Emacs, at least at present, is neither capable of
passively monitoring screen content which is written by other unrelated
processes nor capable of only picking off those key sequences which are
intended for itself and letting the rest be independently interpreted by
unrelated processes.

I am a braille display user, rather than a speech user, and BRLTTY, the Linux
braille display driver, actually does run as a completely independent process
which can be started very early in the boot sequence. I have a great deal of
experiential appreciation, therefore, for the degree of additional flexibility
and freedom which I have over the typical screen reader user. I don't mind if I
need to wait for a sighted person to be around when doing an install, although
I can surely understand that a blind system administrator for a sizeable group
of users would, but I sure do mind when I am hit with a system failure, either
in the middle of the night when no one else is awake, or during the day when
all of my older children (the ones who can read) are at school.

Having a user space screen reader process running independent of any login
session would make it possible for its user, as BRLTTY already does for me, to
detect points in the boot sequence at which the system appears to, or actually
does, hang, and if, for whatever extremely rare reason, the login prompt
doesn't appear, you wouldn't be stuck. One such situation, which I have
personally been hit by, is when there is a serious file system corruption
problem as the boot sequence then enters single user mode so that you can
repair the problem before any further damage and/or data loss occurs. There
have been a number of such times, especially being as I work at home a lot,
when my inability to read the screen could have cost me many hours of personal
frustration and employer-paid time.

-- 
Dave Mielke           | 856 Grenon Avenue | I believe that the Bible is the
Phone: 1-613-726-0014 | Ottawa, Ontario   | Word of God. Please contact me
EMail: dave@mielke.cc | Canada  K2B 6G3   | if you're concerned about Hell.


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: speech standard
           ` Nolan Darilek
@            ` Hans Zoebelein
               ` Nolan Darilek
  0 siblings, 1 reply; 77+ messages in thread
From: Hans Zoebelein @  UTC (permalink / raw)
  To: blinux-list

Ok, that's the last for this night. We have here about 6 am and I'll take
a cup of sleep. Sorry if my "whining" comment offended you. I always
feel that I have to whine for myself too when I have to think about the 
sickly DOS thing.

> 
>    Does any screen reader solve this problem? Don't use pine but the 
>    mail stuff which comes with Emacs. With Emacspeak you even can play
>    Tetris. Try this with a screen reader (good luck :).
> 
> No. See my below statements, about a "smart" screen reader, one which
> allows the user to create a speech interface for the application. For
> pine, you could define windows which reduce the amount of speech to
> only what is needed, thus making the application much more usable.

OK then you write a speech interface for pine. But this isn't a screen
reader anymore but a user interface which still needs a decent speech 
server to function.

> 
>    Then you have to use a screen reader which uses C++...
> 
> Heh. I had considered writing mine in C++. Though, it doesn't need to
> be extended in C++; an embedded Python interpreter with hooks for
> loading specific scripts based on the foreground process would work
> nicely. The core of the speech interface (windows, audio widgets,
> etc.) would be hard-coded, but they can be instantiated in Python, and
> perhaps modified. I'm still investigating the possibilities. This
> differs from most standard screen readers which I've seen, because its
> configuration is much more flexible.

Good luck.

> 
>    Then you should go back to good old DOS and be happy with config.sys
>    and autoexec.bat. There are a lot of tools to get the last free himem
>    kilobyte out of your box. Btw. how many megs might a
>    screen reader need which runs under another OS like MS Windows?
> 
> This mentality is what I'm trying to avoid. Rescue disks are perfect
> examples of where smaller products would be much better. In one
> instance, I somehow managed to trash my Linux partition. Thus, I
> couldn't use Emacspeak, unless you've figured out how to cram Emacs
> and Emacspeak onto a floppy. :) Granted, zip disks can avoid this, and
> I'll one day look into the possibility of creating a sort of 'Red
> Cross Rescue Disk', one which gives you all standard rescue utilities,
> Emacs/Emacspeak, and tools to log onto the net for support. :)

If you use the worst case of a system crash as your guideline, OK. I
would use a bootable CD-ROM which holds a mountable live file system on
it to recover.
 
> 
>    What about using a hardware braille line? Or you may use a hardware
>    speech synth which uses the serial option of the 2.2 kernel. But I doubt
>    that you will use this tool for after boot work.
> 
> Certainly options worth considering. I'd love to be able to use brltty
> with my Alva, but I haven't quite gotten it to work through the serial
> port. I might try the parallel driver one of these days. The major
> problem I had with that at the time was that it required root
> privileges. I wasn't sure if I could just launch it in the background
> and login as myself, or if I had to be root constantly to use it.

Most power users would be very unhappy with this approach. They would
probably feel amputated.
> 
>    Send another email, whining won't cut it.
> 
> I'm generally a very easy-going and difficult-to-anger individual. But
> this one line, well, it frustrated and angered me quite a bit.
> 
> First, my previous post was meant to be an intelligent list of my
> concerns with the current access solutions available for
> Linux. Everyone seems to constantly praise Emacspeak and, while I feel
> that it is great, I wanted to offer some of my issues with it. One of
> those issues was the fact that relatively few applications were
> well-supported, speech-wise, and that support for other applications
> was non-existent, or not adequate enough. I'm trying to make do with
> the applications which are supported, but I'm finding that my needs
> aren't being met in certain areas, and that other speech solutions
> would allow users to use other applications as easily. I currently
> lack the experience to delve into Elisp source, so when W3 doesn't
> work, I need to be able to request assistance from developers and
> lists. When UT's massive bureaucracy points to a secure webpage, I'd
> like to be able to click on the link and go there. Similarly,
> professor rating sheets require the use of a UTEID (University of
> Texas Electronic ID, gotta love acronyms :) and this, too, uses secure
> transactions. Anyhow, my requests for support haven't accomplished
> anything; I'm thinking about setting up a cron job to send out my
> saved post once every few weeks or so :), which is why I feel that a

My support experiences are excellent, so I cannot complain
about support problems.

> more adaptable access solution is needed, one which doesn't depend on
> application-specific modifications to provide auditory output. The

How do you want to become more adaptable when you don't want to become
application specific? Adaptable to what, if not the application?

> argument can be made that authors of opensource projects aren't
> obligated to provide support for their work, and I agree with this
> wholeheartedly; I've been guilty of this on an occasion or two. :)
> But, sighted users can choose to use another web browser if they
> experience problems like mine; I can't, or at least, not as easily.
>
> But, apparently, that point was missed. Instead, you choose to imply
> that I was 'whining'. I wouldn't have minded a more politely-stated
> rebuttal of my point, but that wasn't given. I also have nothing
> against intelligent arguing and debating, and while this post does for
> the most part accomplish this, I feel that some care should be taken
> in what is said. It's quite possible that I'm misunderstanding your
> intent, but I'm reminded of the old saying: 'If you can't say anything
> good . . .' :) So, let's try to keep the debate on an intellectual
> level, and not a confrontational one. Remember, agree to
> disagree... Sorry for the cliches, they've been pounded into my head
> during the course of some conflict res. classes. :)

OK, sorry if the "whining" thing offended you...
> 
>    Will a screen reader instead solve your problems _after_ booting?
> 
> If I am unable to access Emacs/Emacspeak, or want to use an
> application which I don't know enough about Elisp to extend, yes it
> will. Though, Emacs is a diverse environment, and I don't think a
> screen reader will ever provide the level of flexibility that
> Emacspeak does when Emacs is being used. So, I would use a screen
> reader for accessing lynx, using pine and playing character-based
> roguelikes and such, while I'd use Emacspeak whenever I need to code,
> write papers or games, etc.

So you'll have to use your preferred language to write a screen reader 
extension... And then you'll still need a decent speech server...


--Hans


> 


> ,-----------------------------------------------------------------------------.
> >              Nolan Darilek             |                                    <
> >           nolan_d@bigfoot.com          |    University of Texas at Austin   <
> >         nolan_d@mail.utexas.edu        |      Freshman, Computer Science    <
> >              ICQ: 15709478             |      Orientation Advisor, 1999     <
> >    http://www.bigfoot.com/~nolan_d/    |                                    <
> `-----------------------------------------------------------------------------'
>    Don't worry about the world coming to an end today.  It's already tomorrow
>                                  in Australia.


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: speech standard
             ` Hans Zoebelein
@              ` Nolan Darilek
                 ` Ann K. Parsons
                                 ` (2 more replies)
  0 siblings, 3 replies; 77+ messages in thread
From: Nolan Darilek @  UTC (permalink / raw)
  To: blinux-list


   Resent-Cc: recipient list not shown: ;
   MBOX-Line: From blinux-list-request@redhat.com  Fri Mar 12 00:30:51 1999
   MBOX-Line: From foo@bar  Fri Mar 12 06:25:09 1999
   Date: Fri, 12 Mar 1999 06:25:08 +0100 (CET)
   From: Hans Zoebelein <zocki@goldfish.cube.net>
   Resent-From: blinux-list@redhat.com
   Reply-To: blinux-list@redhat.com
   Precedence: list
   Resent-Sender: blinux-list-request@redhat.com

   Ok, that's the last for this night. We have here about 6 am and I'll take
   a cup of sleep. Sorry if my "whining" comment offended you. I always
   feel that I have to whine for myself too when I have to think about the 
   sickly DOS thing.

Heh, no problem. Sometimes thinking about DOS, and microsoft products
myself is enough to turn my stomach. :)

   OK then you write a speech interface for pine. But this isn't a screen
   reader anymore but a user interface which still needs a decent speech 
   server to function.

Hmm. Maybe I should refer to it as that, then. In my mind, though, a
screen reader is anything that reads the contents of the screen, and
Emacspeak fits that bill. Though Emacspeak IS reading from an
Emacs buffer, it is reading the material which is appearing on my
monitor as I type this. :) But yes, this would be a twist on the
Emacspeak interface concept; instead of relying on Emacs, you're
relying on a smaller driver which provides many similar
functions. And, I agree completely concerning the speech servers
created for Emacspeak. A few months ago I began some initial coding
and testing, and created classes which interfaced with an Emacspeak
speech server. My only issue is that it appears to rely on 'dectalk'
command prefixes. I.e. 'dectalk_say' instead of just 'say'. Though
this is more of an aesthetic concern than a functionality/usability one.
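The classes described above could be sketched roughly as follows. This is a hypothetical illustration, not the actual code Nolan wrote: the `SpeechServer` class, its `prefix` parameter, and the `verb {text}` command convention are all assumptions made for the example, not the documented protocol of any particular Emacspeak speech server.

```python
import subprocess

# Hypothetical sketch: a thin client that spawns an Emacspeak-style
# speech server and hides the synthesizer-specific command prefix
# ('dectalk_say' vs. plain 'say') behind generic method names.
class SpeechServer:
    def __init__(self, argv, prefix="dectalk"):
        self.prefix = prefix
        self.proc = subprocess.Popen(argv, stdin=subprocess.PIPE, text=True)

    def _command(self, verb, text=None):
        # Build e.g. "dectalk_say {hello}" from a generic verb.
        name = f"{self.prefix}_{verb}" if self.prefix else verb
        return f"{name} {{{text}}}" if text is not None else name

    def say(self, text):
        self.proc.stdin.write(self._command("say", text) + "\n")
        self.proc.stdin.flush()

    def stop(self):
        self.proc.stdin.write(self._command("stop") + "\n")
        self.proc.stdin.flush()
```

With `prefix=""`, the same wrapper would drive a server that takes bare `say` commands, which is the aesthetic point being made: the prefix belongs in the driver, not in every caller.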

   > 
   >    Then you have to use a screen reader which uses C++...
   > 
   > Heh. I had considered writing mine in C++. Though, it doesn't need to
   > be extended in C++; an embedded Python interpreter with hooks for
   > loading specific scripts based on the foreground process would work
   > nicely. The core of the speech interface (windows, audio widgets,
   > etc.) would be hard-coded, but they can be instantiated in Python, and
   > perhaps modified. I'm still investigating the possibilities. This
   > differs from most standard screen readers which I've seen, because its
   > configuration is much more flexible.

   Good luck.

Thanks. I'll need it. :)

   > 
   >    Then you should go back to good old DOS and be happy with config.sys
   >    and autoexec.bat. There are a lot of tools to get the last free himem
   >    kilobyte out of your box. Btw. how many megs might a
   >    screen reader need which runs under another OS like MS Windows?
   > 
   > This mentality is what I'm trying to avoid. Rescue disks are perfect
   > examples of where smaller products would be much better. In one
   > instance, I somehow managed to trash my Linux partition. Thus, I
   > couldn't use Emacspeak, unless you've figured out how to cram Emacs
   > and Emacspeak onto a floppy. :) Granted, zip disks can avoid this, and
   > I'll one day look into the possibility of creating a sort of 'Red
   > Cross Rescue Disk', one which gives you all standard rescue utilities,
   > Emacs/Emacspeak, and tools to log onto the net for support. :)

   If you use the worst case of a system crash as your guideline, OK. I
   would use a bootable CD-ROM which holds a mountable live file system on
   it to recover.

I considered that, too. A CDROM live filesystem with a preconfigured
version of Emacspeak on it would be awesome.

   How do you want to become more adaptable when you don't want to become
   application specific? Adaptable to what, if not the application?

Let me use Pine as an example. (Please, someone, jump in and correct
me if I'm wrong.) The Emacspeak/auditory interface method for
speech-enabling Pine, assuming Pine were an Elisp subsystem, would be
to modify the code for Pine itself. I can see several problems created
by this issue alone. First, it is easy to say that, since users have
the source and materials, they can 'do it themselves'. This may
be difficult in some cases. Though I've been programming for years,
some of the concepts of Elisp confused me; they were very different
from the imperative languages I'm used to. I've been taking a
functional programming course, so I may find it a little easier now
that I know more than I want to know about recursion and such. Anyhow,
assuming that you can make the modifications, and assuming that the
author is agreeable to including them in the original distribution,
there wouldn't be a problem.

Now, we'll examine the scenario using my proposed speech interface. If
you find an application which you are interested in using,
speech-enabling it could be done completely by the user, and no effort
has to be expended to modify the original source tree. A user could
start Pine and, using a sort of dumb screen review interface, identify
components of the interface which should and shouldn't be
spoken. Then, the user could create a script which handles the
auditory interface. Python would be a good imperative language to use,
though if something like SWIG (http://www.swig.org) is used, creating
Python, Perl and TCL interfaces to the same facilities from the same
codebase would be a snap. Thus, if a user has problems with TCL, maybe
Python or Perl would be easier to grasp. With a pool of
user-contributable interfaces, you wouldn't be limited to a few mail
applications and one or two web browsers. Removing the need to modify
the source of the original application would, IMHO, increase the
amount of available, speech-enabled applications.
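The dispatch idea in the paragraph above could be sketched like this. Everything here is an illustrative assumption, not an existing API: a registry maps foreground-process names to user-contributed speech scripts, with a dumb "speak the whole screen" reviewer as the fallback when no script exists for an application.

```python
# Hypothetical sketch of the proposed per-application speech interface:
# the core looks up a handler by foreground process name; users add
# scripts without touching the original application's source tree.
HANDLERS = {}

def speech_script(process_name):
    """Decorator registering a per-application speech handler."""
    def register(func):
        HANDLERS[process_name] = func
        return func
    return register

def default_reviewer(screen_lines):
    # Dumb fallback: speak every non-blank line on the screen.
    return [line for line in screen_lines if line.strip()]

def lines_to_speak(process_name, screen_lines):
    handler = HANDLERS.get(process_name, default_reviewer)
    return handler(screen_lines)

@speech_script("pine")
def pine_handler(screen_lines):
    # A user-defined "window": skip the title bar and the menu rows,
    # speak only the message index in between.
    return [line for line in screen_lines[1:-2] if line.strip()]
```

An unsupported application still works through `default_reviewer`, which is what makes the pool of user-contributed scripts additive rather than mandatory.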

   OK, sorry if the "whining" thing offended you...

Again, no problem. :) I wanted to avoid any perceptions that I was
complaining about the 'state of the union' as it were. I'm merely
expressing concerns, mentioning my ideas for solutions, and hoping
for any compliments/criticisms of these ideas. I'm not a UI expert,
and don't have much in-school experience with computer science, so I
welcome wholeheartedly anyone who can point out flaws with my ideas,
so I can work around them, or if need be, scrap the idea completely.

   So you'll have to use your preferred language to write a screen reader 
   extension... And then you'll still need a decent speech server...

Absolutely! :) And I have nothing against the existing Emacspeak
servers, except for the minor aesthetic point I mentioned above, since
I'm one of those picky and detail-oriented types. But, it works, so I
won't complain.

,-----------------------------------------------------------------------------.
>              Nolan Darilek             |                                    <
>           nolan_d@bigfoot.com          |    University of Texas at Austin   <
>         nolan_d@mail.utexas.edu        |      Freshman, Computer Science    <
>              ICQ: 15709478             |      Orientation Advisor, 1999     <
>    http://www.bigfoot.com/~nolan_d/    |                                    <
`-----------------------------------------------------------------------------'
                             REAL criminal reports.
          The Ann Arbor News crime column reported that a man walked into a
           Burger King in Ypsilanti, Michigan at 7:50am, flashed a gun and
        demanded cash. The clerk turned him down because he said he couldn't
          open the cash register without a food order. When the man ordered
        onion rings, the clerk said they weren't available for breakfast. The
                            man, frustrated, walked away.


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: speech standard
           ` Dave Mielke
@            ` Whistler
  0 siblings, 0 replies; 77+ messages in thread
From: Whistler @  UTC (permalink / raw)
  To: blinux-list


Neither of your suggested methods would have caught my last boot-up error.
The error had to do with a very bad disk drive and would not let Linux
complete the boot-up. From what I know the Speaqualizer will let me do this,
but I haven't been able to get $600 free to give that a shot.

By the way, I had to wait on a sighted friend to come over and tell me what
was being repeatedly printed to the screen. Go figure.

Ken /whistler

On Thu, 11 Mar 1999, Dave Mielke wrote:

> On Thu, 11 Mar 1999, Whistler wrote:
> >I would like the choice to listen to any part of
> >the boot sequence that a sighted person can.  Understand I wouldn't want
> >to be forced to listen to any of it but I would like the ability to listen
> >to it as it was happening not after the fact.  
> 
> I think that you have a misconception regarding how easy it is for a sighted
> person to really know what is happening by watching the screen as the system
> boots. Perhaps, in the days of slower processors, it was easy. Now, however,
> with faster processors, the output scrolls by too quickly. The best a sighted
> person can do is focus very intently on the screen and try to pick out an error
> message or two. While the eye is suited for this task, the ear is not. You
> would be unable to listen to your system boot sequence spoken at break-neck
> speed, and pick out anything at all, let alone discern an error message from
> the rest of the noise.
> 
> A sighted person in search of a system boot error usually does exactly what you
> can also do with a screen reader running in user space. He either reviews the
> system log files with an editor, or uses Shift-PageUp to scroll backward in the
> console buffer once the system has finished booting. The latter option, of
> course, would require that the screen reader process be implemented in a way
> which most aren't, i.e. it must be startable somewhere within the boot sequence
> itself (after the point at which the synthesizer's serial port has been
> configured and enabled), and, therefore, would necessarily be running
> independently of, and asynchronously to, any login session.
> 
> I'm not sure that it would be very easy to get EmacSpeak to work properly when
> run in this way, i.e. separate from, and asynchronous to, all user login
> sessions. Its biggest problem in this regard is that it would be insecure
> because it's part of Emacs, which means that it would have access to all kinds
> of system resources without the benefit of prior user authentication. Another
> significant problem is that Emacs, at least at present, is neither capable of
> passively monitoring screen content which is written by other unrelated
> processes nor capable of only picking off those key sequences which are
> intended for itself and letting the rest be independently interpreted by
> unrelated processes.
> 
> I am a braille display user, rather than a speech user, and BRLTTY, the Linux
> braille display driver, actually does run as a completely independent process
> which can be started very early in the boot sequence. I have a great deal of
> experiential appreciation, therefore, for the degree of additional flexibility
> and freedom which I have over the typical screen reader user. I don't mind if I
> need to wait for a sighted person to be around when doing an install, although
> I can surely understand that a blind system administrator for a sizeable group
> of users would, but I sure do mind when I am hit with a system failure, either
> in the middle of the night when no one else is awake, or during the day when
> all of my older children (the ones who can read) are at school.
> 
> Having a user space screen reader process running independent of any login
> session would make it possible for its user, as BRLTTY already does for me, to
> detect points in the boot sequence at which the system appears to, or actually
> does, hang, and if, for whatever extremely rare reason, the login prompt
> doesn't appear, you wouldn't be stuck. One such situation, which I have
> personally been hit by, is when there is a serious file system corruption
> problem as the boot sequence then enters single user mode so that you can
> repair the problem before any further damage and/or data loss occurs. There
> have been a number of such times, especially being as I work at home a lot,
> when my inability to read the screen could have cost me many hours of personal
> frustration and employer-paid time.
> 
> -- 
> Dave Mielke           | 856 Grenon Avenue | I believe that the Bible is the
> Phone: 1-613-726-0014 | Ottawa, Ontario   | Word of God. Please contact me
> EMail: dave@mielke.cc | Canada  K2B 6G3   | if you're concerned about Hell.
> 
> ---
> Send your message for blinux-list to blinux-list@redhat.com
> Blinux software archive at ftp://leb.net/pub/blinux
> Blinux web page at http://leb.net/blinux
> To unsubscribe send mail to blinux-list-request@redhat.com
> with subject line: unsubscribe
> 


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: speech standard
               ` Nolan Darilek
@                ` Ann K. Parsons
                   ` Nolan Darilek
                 ` Hans Zoebelein
                 ` speech standard T. V. Raman
  2 siblings, 1 reply; 77+ messages in thread
From: Ann K. Parsons @  UTC (permalink / raw)
  To: blinux-list

Hi all,

Nolan, could you please write your messages as regular text instead of
sending them as attachments?  I'm having some difficulty reading
them.  Thanks.

Ann P.

-- 
Ann K. Parsons
email:  akp@eznet.net
web site:  http://home.eznet.net/~akp
MICQ Number:  33006854

"All that is gold does not glitter.
Not all those who wander are lost."
J.R.R. Tolkien


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: speech standard
                 ` Ann K. Parsons
@                  ` Nolan Darilek
                     ` T. V. Raman
  0 siblings, 1 reply; 77+ messages in thread
From: Nolan Darilek @  UTC (permalink / raw)
  To: blinux-list

[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #1: Type: text/plain; charset=NIL, Size: 1603 bytes --]

   Resent-Cc: recipient list not shown: ;
   MBOX-Line: From blinux-list-request@redhat.com  Fri Mar 12 08:20:52 1999
   From: "Ann K. Parsons" <akp@eznet.net>
   Date: Fri, 12 Mar 1999 08:11:16 -0500 (EST)
   Resent-From: blinux-list@redhat.com
   Reply-To: blinux-list@redhat.com
   Precedence: list
   Resent-Sender: blinux-list-request@redhat.com

   Hi all,

   Nolan, could you please write your messages as regular text instead of
   sending them as attachments?  I'm having some difficulty reading
   them.  Thanks.

   Ann P.

Interesting. This isn't happening for me; all of the messages I get
back look fine. I also just sent myself a test message with no
problems. I'm using Rmail; does anyone have any ideas about what may
be causing this?

,-----------------------------------------------------------------------------.
>              Nolan Darilek             |                                    <
>           nolan_d@bigfoot.com          |    University of Texas at Austin   <
>         nolan_d@mail.utexas.edu        |      Freshman, Computer Science    <
>              ICQ: 15709478             |      Orientation Advisor, 1999     <
>    http://www.bigfoot.com/~nolan_d/    |                                    <
`-----------------------------------------------------------------------------'
      Excerpt from a conversation between a customer support person and a
      customer working for a well-known military-affiliated research lab:
              Support:  "You're not our only customer, you know."
      Customer: "But we're one of the few with tactical nuclear weapons."


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: speech standard
               ` Nolan Darilek
                 ` Ann K. Parsons
@                ` Hans Zoebelein
                   ` Nolan Darilek
                   ` Speech-enabling approach (was Re: speech standard) Jason White
                 ` speech standard T. V. Raman
  2 siblings, 2 replies; 77+ messages in thread
From: Hans Zoebelein @  UTC (permalink / raw)
  To: blinux-list

On 12 Mar 1999, Nolan Darilek wrote:

[stuff omitted]

> Hmm. Maybe I should refer to it as that, then. In my mind, though, a
> screen reader is anything that reads the contents of the screen. In my
> mind, Emacspeak fits that bill. Though Emacspeak IS reading from an

NO! A screen reader reads the screen. Emacspeak is an environment with
built-in intelligence which even adds (or omits) output
to improve your interaction with your workstation. You get a better grip
on your box.

And yes, one of the possible modes you can run Emacspeak in really is
a screen reader mode (but you will use this only if there isn't any
other, more intelligent mode available.)

[stuff omitted]

> created for Emacspeak. A few months ago I began some initial coding
> and testing, and created classes which interfaced with an Emacspeak
> speech server. My only issue is that it appears to rely on 'dectalk'
> command prefixes, i.e. 'dectalk_say' instead of just 'say'. Though
> this is more of an aesthetic concern than a functionality/usability one.

You could rewrite the commands as you want them to be named. It's only that
programmers are lazy and rarely rewrite code which is already running... 

[ stuff omitted]
> 
>    If you use the worst case of a system crash as your guideline, OK. I
>    would use a bootable CD-ROM which holds a mountable live file system on
>    it to recover.
> 
> I considered that, too. A CDROM live filesystem with a preconfigured
> version of Emacspeak on it would be awesome.

It's at your fingertips if you own a CD-ROM burner.

> 
>    How do you want to become more adaptable when you don't want to become
>    application specific? Adaptable to what, when not the application? 

[stuff about writing a programmable application-to-user interface
omitted]

OK, write such an interface. It will be an interesting task. The other
approach is to go into the source tree and pipe stuff to a speech
server (which IMHO will give you the best results). Both cases have one
thing in common: the need for a decent speech server.

[stuff omitted]
> 
>                    And I have nothing against the existing Emacspeak
> servers, except for the minor asthetic point I mentioned above, since
> I'm one of those picky and detail-oriented types. But, it works, so I
> won't complain.

Some 'grep', 'sed' and 'awk' treatment of the source files will let you
adapt the commands to the names you prefer.
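For what it's worth, that kind of mechanical rename can be sketched in a few
lines (Python here rather than sed/awk; the 'dectalk_' prefix comes from the
discussion above, while the helper name and sample line are hypothetical):

```python
import re

def rename_commands(source: str, prefix: str = "dectalk_") -> str:
    """Strip a synthesizer-specific prefix from command names,
    e.g. 'dectalk_say' -> 'say'. A purely textual rename."""
    return re.sub(rf"\b{re.escape(prefix)}(\w+)", r"\1", source)

# A hypothetical line from a speech server source file:
line = "proc dectalk_say {text} { dectalk_queue $text }"
print(rename_commands(line))  # proc say {text} { queue $text }
```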


--Hans


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: speech standard
               ` Nolan Darilek
                 ` Ann K. Parsons
                 ` Hans Zoebelein
@                ` T. V. Raman
                   ` Interfaces (Was Re: speech standard) Nolan Darilek
  2 siblings, 1 reply; 77+ messages in thread
From: T. V. Raman @  UTC (permalink / raw)
  To: blinux-list

The assertion that Emacspeak modifies the underlying lisp
source of the application is false --in fact, one of its
unique implementation features is to speech-enable apps without
modifying their source.
If you're interested in learning about it, there is plenty of
info on how it is done on my WWW site.
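(Emacspeak achieves this with Emacs Lisp "advice". As a rough analogy only --
Python, not Emacspeak's actual code -- one can wrap an existing function with
speech output without ever editing its source:)

```python
import functools

spoken = []  # stand-in for output handed to a speech server

def speech_enable(func):
    """Wrap func so its result is also 'spoken', leaving the
    function's own source untouched -- loosely analogous to
    advising a function in Emacs Lisp."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        result = func(*args, **kwargs)
        spoken.append(str(result))  # send the result to the synthesizer
        return result
    return wrapper

# An 'application' function whose source we never modify:
def current_date():
    return "Friday, March 12, 1999"

current_date = speech_enable(current_date)
current_date()
print(spoken)  # ['Friday, March 12, 1999']
```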

-- 
Best Regards,
--raman

      Adobe Systems                 Tel: 1 408 536 3945   (W14-128)
      Advanced Technology Group     Fax: 1 408 537 4042 
      W14-128 345 Park Avenue     Email: raman@adobe.com 
      San Jose , CA 95110 -2704     Email:  raman@cs.cornell.edu
      http://labrador.corp.adobe.com/~raman/        (Adobe Intranet)
      http://cs.cornell.edu/home/raman/    (Cornell)
----------------------------------------------------------------------
    Disclaimer: The opinions expressed are my own and in no way should be taken
as representative of my employer, Adobe Systems Inc.
____________________________________________________________


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: speech standard
                 ` Hans Zoebelein
@                  ` Nolan Darilek
                     ` Whistler
                   ` Speech-enabling approach (was Re: speech standard) Jason White
  1 sibling, 1 reply; 77+ messages in thread
From: Nolan Darilek @  UTC (permalink / raw)
  To: blinux-list

[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #1: Type: text/plain; charset=NIL, Size: 2661 bytes --]

   Date: Fri, 12 Mar 1999 16:39:31 +0100 (CET)
   From: Hans Zoebelein <zocki@goldfish.cube.net>

   NO! A screen reader reads the screen. Emacspeak is an environment with
   built in intelligence which even adds (or omits) output 
   to improve your interaction with your workstation. You get a better grip
   on your box. 

How is this different from, say, vocal-eyes? With VE, I could
configure an application to speak its menubars when I hit alt. This,
to me, is some level of 'built-in intelligence'. Similarly,
multi-columned output could be implemented by dynamically creating two
windows based on screen content, and allowing each column to be
read. Emacspeak improves on these concepts, but I think that they have
been present in many previous screen readers, albeit to a lesser
extent. Emacspeak refers to them as Elisp modifications, advice, etc.;
VE refers to set files.

   You could rewrite the commands as you want them to be named. Its only that
   programmers are lazy and normally rewrite code which is running... 

The classes used say() methods which transmitted the string directly to
the pipe, prepending it with 'dectalk_say' and checking for
errors. I'm hesitant to modify existing speech servers because then
you create diverging standards.
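The sort of client class being described might look roughly like this (a
sketch; the class and method names are guesses from the description, the
'dectalk_say' wire command comes from the thread, and the prefix is made
configurable rather than hard-coded):

```python
import io

class SpeechClient:
    """Minimal sketch of a client for an Emacspeak-style speech
    server. Only the 'dectalk_say' command name is taken from the
    discussion above; everything else here is hypothetical."""

    def __init__(self, pipe, prefix: str = "dectalk_"):
        self.pipe = pipe  # writable file-like object (the server's stdin)
        self.prefix = prefix

    def say(self, text: str) -> None:
        # Prepend the command prefix and transmit down the pipe.
        self.pipe.write(f"{self.prefix}say {{{text}}}\n")
        self.pipe.flush()

pipe = io.StringIO()
SpeechClient(pipe).say("hello world")
print(repr(pipe.getvalue()))  # 'dectalk_say {hello world}\n'
```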

   It's on your fingertips if you own a CD-ROM burner..

Heh, don't have the money. (Actually I do, but I've got my sights set on
one of those Diamond Rio portable MP3 players. :)

,-----------------------------------------------------------------------------.
>              Nolan Darilek             |                                    <
>           nolan_d@bigfoot.com          |    University of Texas at Austin   <
>         nolan_d@mail.utexas.edu        |      Freshman, Computer Science    <
>              ICQ: 15709478             |      Orientation Advisor, 1999     <
>    http://www.bigfoot.com/~nolan_d/    |                                    <
`-----------------------------------------------------------------------------'
                             REAL criminal reports.
          Drug-possession defendant Christopher Johns, on trial in March in
         Pontiac, Michigan, said he had been searched without a warrant. The
       prosecutor said the officer didn't need a warrant because of a "bulge"
            in Christopher's jacket could have been a gun. Nonsense, said
         Christopher, who happened to be wearing the same jacket that day in
            court. He handed it over so the judge could see it. The judge
         discovered a packet of cocaine in the pocket and laughed so hard he
                  required a five-minute recess to compose himself.


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: speech standard
                   ` Nolan Darilek
@                    ` T. V. Raman
  0 siblings, 0 replies; 77+ messages in thread
From: T. V. Raman @  UTC (permalink / raw)
  To: blinux-list

It looks like you've misconfigured TM --the multimedia mail
extension for rmail-- and text messages are being sent out as
MIME attachments, and worse, with a charset value of nil.

You need to go consult the rmail documentation --I don't use rmail.
>>>>> "Nolan" == Nolan Darilek <nolan_d@bigfoot.com> writes:

    Nolan>    Resent-Cc: recipient list not shown: ;
    Nolan> MBOX-Line: From blinux-list-request@redhat.com
    Nolan> Fri Mar 12 08:20:52 1999 From: "Ann K. Parsons"
    Nolan> <akp@eznet.net> Date: Fri, 12 Mar 1999 08:11:16
    Nolan> -0500 (EST) Resent-From: blinux-list@redhat.com
    Nolan> Reply-To: blinux-list@redhat.com Precedence: list
    Nolan> Resent-Sender: blinux-list-request@redhat.com

    Nolan>    Hi all,

    Nolan>    Nolan, could you please write your messages as
    Nolan> regular text instead of sending them as
    Nolan> attachments?  I'm having some difficulty reading
    Nolan> them.  Thanks.

    Nolan>    Ann P.

    Nolan> Interesting. This isn't happening for me; all of
    Nolan> the messages I get back look fine. I also just
    Nolan> sent myself a test message with no problems. I'm
    Nolan> using Rmail; does anyone have any ideas about
    Nolan> what may be causing this?


-- 
Best Regards,
--raman

      Adobe Systems                 Tel: 1 408 536 3945   (W14-128)
      Advanced Technology Group     Fax: 1 408 537 4042 
      W14-128 345 Park Avenue     Email: raman@adobe.com 
      San Jose , CA 95110 -2704     Email:  raman@cs.cornell.edu
      http://labrador.corp.adobe.com/~raman/        (Adobe Intranet)
      http://cs.cornell.edu/home/raman/    (Cornell)
----------------------------------------------------------------------
    Disclaimer: The opinions expressed are my own and in no way should be taken
as representative of my employer, Adobe Systems Inc.
____________________________________________________________


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Interfaces (Was Re: speech standard)
                 ` speech standard T. V. Raman
@                  ` Nolan Darilek
  0 siblings, 0 replies; 77+ messages in thread
From: Nolan Darilek @  UTC (permalink / raw)
  To: raman; +Cc: blinux-list

   Date: Fri, 12 Mar 1999 08:26:16 -0800 (PST)
   Reply-To: raman@Adobe.COM
   From: "T. V. Raman" <raman@Adobe.COM>
   Resent-From: blinux-list@redhat.com
   Precedence: list
   Resent-Sender: blinux-list-request@redhat.com

   the assertion  that Emacspeak modifies the underlying lisp
   source of the application is false --in fact one of its
   unique implementation features to speech-enable apps without
   modifying their source.

Hmm. I think I was referring to the functions
w3-table-speak-*. Though, in retrospect, I could see how these would
best be integrated into the application, with direct access to the
HTML source. It also serves as an example of why Emacspeak
isn't a screen reader, since the entire table may not be on the
screen. Or, at least, so I thought as Slashdot's news scrolled past
this morning; funny what a little caffeine and thought can reveal. :)

   If you're interested in learning about it there is plenty of
   info on how it is done on my WWW site.

Cool. I'll give that a glance before beginning any serious design
work. I remember reading the article about speech-enabling Tetris and
thought it was good. As an aside, I've been trying to think of how to
speech-enable something like sail. Whereas in Tetris you have
relatively predictable actions, pieces falling from the sky, in sail
you have ships closing, firing shots, attempting to board, etc. in
realtime. It'd be interesting to try to determine what sighted users
see when they examine a tactical strategy game. I'm sure they can't
take in everything in a single glance, or if they can, that they can't
make accurate decisions based on all of the information. Anyhow,
that's something which I've been considering for a while; I'd love to
be able to play a multi-player strategy game or two. :)

BTW, I removed TM from my system completely, since I don't use
mime. Hopefully that will fix the attachment problems. Maybe I'll
reinstall it and delve into the problem next week.

,-----------------------------------------------------------------------------.
>              Nolan Darilek             |                                    <
>           nolan_d@bigfoot.com          |    University of Texas at Austin   <
>         nolan_d@mail.utexas.edu        |      Freshman, Computer Science    <
>              ICQ: 15709478             |      Orientation Advisor, 1999     <
>    http://www.bigfoot.com/~nolan_d/    |                                    <
`-----------------------------------------------------------------------------'
                             REAL criminal reports.
         Dave so-and-so of Anniston, Alabama, was injured recently after he
           attempted to replace a tubelike fuse in his Chevy pickup with a
        22-caliber rifle bullet (used because it was a perfect fit). However,
         when electricity heated the bullet, it went off and shot him in the
                                        knee.


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: speech standard
                   ` Nolan Darilek
@                    ` Whistler
  0 siblings, 0 replies; 77+ messages in thread
From: Whistler @  UTC (permalink / raw)
  To: blinux-list

 I have to add my two cents to this reply. I too haven't seen a big gain
with Emacspeak. In fact I was pissed off for a long time because, running
Emacspeak through an ethernet with ASAP in DOS, I was able to see the error
messages popping up in other windows of Emacs but Emacspeak was silent.
When I got locked up in Emacs because I was not very good at it, I could
read the screen with my DOS screen reader and figure out why things
weren't working; with Emacspeak I wasn't able to. The reason is it
decides what it wants you to read.

So many people have said old DOS screen readers are stupid and take up too
much memory, but my ASAP took up 54k, has artificial intelligence that picks
up on columns and tables, and you are able to set up set files for every
application you want to run. ASAP can take care of interactive programs
like nothing I have found in Windows; JFW really sucks when it comes to
trying to use telnet.
That brings me back to Emacspeak. If all I needed to do is run programs
on remote servers, that would be simple: you just telnet in, do whatever
through Emacs, and log off. Where Emacspeak in my opinion has failed, and
has yet to be made to work satisfactorily, is in interacting with other
users. For example, if I wanted to do something like talk to other users
on line or log on to my game server using Emacs, the problem is it doesn't
track scrolling screens at all; I find myself having to re-read the entire
screen instead of listening to the text as it comes in.

Raman and other professional users of Emacs may have ways of rigging Emacs
to do this, but I have written many posts asking how I can do it and got zero
responses. If DOS screen readers are so stupid, why then can I do this
simply with my DOS screen reader and not with Emacspeak?

My whole point is not to say Emacspeak is a bad solution. I am saying it is
not the only solution, and while in a perfect world Emacspeak has the right
idea, this is no perfect world, and I want to be able to use the
applications I need to use, not the applications I have to use.

It's great that I can play Tetris, but what surprises me is that as of yet I
cannot get my Infocom games to work well under frotz with Emacspeak; why is
that? Maybe I need an Emacs lisp package that will run the zcode games, I
don't know.

For those of you who think I hate Emacspeak, that is not even close to true.
I actually just installed it for my college, and I suggest it to every
blind user I work with as an instructor for people with disabilities at the
college, but what I wouldn't give for the power of ASAP in Linux. It may
even help build a better Emacs to have something like Speakup and Svlpro
to use when Emacs doesn't cut the mustard.

Ken /whistler

On 12 Mar 1999, Nolan Darilek wrote:

>    Date: Fri, 12 Mar 1999 16:39:31 +0100 (CET)
>    From: Hans Zoebelein <zocki@goldfish.cube.net>
> 
>    NO! A screen reader reads the screen. Emacspeak is an environment with
>    built-in intelligence which even adds (or omits) output 
>    to improve your interaction with your workstation. You get a better grip
>    on your box. 
> 
> How is this different from, say, vocal-eyes? With VE, I could
> configure an application to speak its menubars when I hit alt. This,
> to me, is some level of 'built-in intelligence'. Similarly,
> multi-columned output could be implemented by dynamically creating two
> windows based on screen content, and allowing each column to be
> read. Emacspeak improves on these concepts, but I think that they have
> been present in many previous screen readers, albeit to a lesser
> extent. Emacspeak refers to them as Elisp modifications, advice, etc.;
> VE refers to set files.
> 
>    You could rewrite the commands as you want them to be named. It's only that
>    programmers are lazy and rarely rewrite code which is already running... 
> 
> The classes used say() methods which transmitted the string directly to
> the pipe, prepending it with 'dectalk_say' and checking for
> errors. I'm hesitant to modify existing speech servers because then
> you create diverging standards.
> 
>    It's at your fingertips if you own a CD-ROM burner.
> 
> Heh, don't have the money. (Actually I do, but I've got my sights set on
> one of those Diamond Rio portable MP3 players. :)
> 
> ,-----------------------------------------------------------------------------.
> >              Nolan Darilek             |                                    <
> >           nolan_d@bigfoot.com          |    University of Texas at Austin   <
> >         nolan_d@mail.utexas.edu        |      Freshman, Computer Science    <
> >              ICQ: 15709478             |      Orientation Advisor, 1999     <
> >    http://www.bigfoot.com/~nolan_d/    |                                    <
> `-----------------------------------------------------------------------------'
>                              REAL criminal reports.
>           Drug-possession defendant Christopher Johns, on trial in March in
>          Pontiac, Michigan, said he had been searched without a warrant. The
>        prosecutor said the officer didn't need a warrant because of a "bulge"
>             in Christopher's jacket could have been a gun. Nonsense, said
>          Christopher, who happened to be wearing the same jacket that day in
>             court. He handed it over so the judge could see it. The judge
>          discovered a packet of cocaine in the pocket and laughed so hard he
>                   required a five-minute recess to compose himself.
> 
> 


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Speech-enabling approach (was Re: speech standard)
                 ` Hans Zoebelein
                   ` Nolan Darilek
@                  ` Jason White
                     ` Whistler
  1 sibling, 1 reply; 77+ messages in thread
From: Jason White @  UTC (permalink / raw)
  To: blinux-list

Part of the flexibility of Emacspeak lies in its ability to examine
internal data structures of the application which are not apparent from
the visual interface, and provide therefrom an appropriate auditory
rendering. This "speech-enabling" approach is discussed extensively in T.
V. Raman's publications describing Emacspeak
(http://www.cs.cornell.edu/home/raman/publications/). To contrast
Emacspeak with screen readers and illustrate the advantage of the "speech
enabling" approach, T. V. Raman discusses the example of the Emacs
calendar, which presents data on the visual display in a tabular form that
would be difficult for a screen reader user to comprehend. By contrast,
Emacspeak can read the underlying variables (for example the variable that
contains the current date etc.) and thus offer an efficient auditory
presentation.
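The calendar contrast can be caricatured in a few lines (Python, purely
illustrative -- Emacspeak does this inside Emacs Lisp, and the variable and
function names here are invented):

```python
import datetime

# What a screen reader sees in a calendar grid is a terse cell;
# what the application itself knows is the full date.
screen_cell = "12"                         # visual-display text
current_date = datetime.date(1999, 3, 12)  # internal data structure

def speak_from_screen(cell: str) -> str:
    return cell  # "twelve" -- ambiguous out of context

def speak_from_data(date: datetime.date) -> str:
    return date.strftime("%A, %B %d, %Y")  # full, unambiguous date

print(speak_from_screen(screen_cell))  # 12
print(speak_from_data(current_date))   # Friday, March 12, 1999
```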

The more complex a user interface becomes, the more difficult it is to
provide an auditory or braille representation simply by relying on the
visual interface as a starting point. Another example of this is the world
wide web and the importance of the Document Object Model as a means of
providing semantic distinctions that are not conveyed in the visual
interface.



^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: Speech-enabling approach (was Re: speech standard)
                     ` Whistler
@                      ` Jason White
                         ` Whistler
                         ` Internal data (was Re: Speech-enabling approach) T.Pospisek's MailLists
  0 siblings, 2 replies; 77+ messages in thread
From: Jason White @  UTC (permalink / raw)
  To: blinux-list

To take the calendar example, a screen reader could be programmed to
locate the column headers and read them automatically whenever the cursor
moved, but this still wouldn't provide the same fluency as Emacspeak does
when the latter announces the complete date of the calendar entry.
Furthermore, if one moves beyond the specific calendar context and
considers tables more generically (including the case of cells which span
rows and columns, column groups, the use of a particular column as a
series of de facto headers, etc.), it soon becomes clear that the
conventional "screen reading" approach can't cope with the complexity
inherent in the data that are being presented by the interface. If one
considers mathematical notation, forms as encountered on web pages (where
the placement of the label relative to the entry field is not uniform),
and so on, it becomes evident that monitoring of the visual interface does
not provide sufficient information to permit an equivalently effective
auditory interface to be constructed.

One has to look beyond a particular example and consider the harder cases
before drawing conclusions. 



^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: Speech-enabling approach (was Re: speech standard)
                   ` Speech-enabling approach (was Re: speech standard) Jason White
@                    ` Whistler
                       ` Jason White
  0 siblings, 1 reply; 77+ messages in thread
From: Whistler @  UTC (permalink / raw)
  To: blinux-list

As I have previously posted, I have read the book, I have read
everything I could find of Raman's ideas, and I use Emacspeak. I always
have to laugh when I see the calendar post, though, because I used a
calendar just fine in DOS with ASAP; not all speech programs were stupid,
and some could learn the application quickly.

Ken /whistler

On Sat, 13 Mar 1999, Jason White wrote:

> Part of the flexibility of Emacspeak lies in its ability to examine
> internal data structures of the application which are not apparent from
> the visual interface, and provide therefrom an appropriate auditory
> rendering. This "speech-enabling" approach is discussed extensively in T.
> V. Raman's publications describing Emacspeak
> (http://www.cs.cornell.edu/home/raman/publications/). To contrast
> Emacspeak with screen readers and illustrate the advantage of the "speech
> enabling" approach, T. V. Raman discusses the example of the Emacs
> calendar, which presents data on the visual display in a tabular form that
> would be difficult for a screen reader user to comprehend. By contrast,
> Emacspeak can read the underlying variables (for example the variable that
> contains the current date etc.) and thus offer an efficient auditory
> presentation.
> 
> The more complex a user interface becomes, the more difficult it is to
> provide an auditory or braille representation simply by relying on the
> visual interface as a starting point. Another example of this is the world
> wide web and the importance of the Document Object Model as a means of
> providing semantic distinctions that are not conveyed in the visual
> interface.
> 
> 
> 


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: Speech-enabling approach (was Re: speech standard)
                       ` Jason White
@                        ` Whistler
                           ` Jason White
                         ` Internal data (was Re: Speech-enabling approach) T.Pospisek's MailLists
  1 sibling, 1 reply; 77+ messages in thread
From: Whistler @  UTC (permalink / raw)
  To: blinux-list

I am sorry, but I have, let's see, passed calculus, physics, and technical
writing classes, all with those stupid speech programs you speak of. I
have never had a problem that took me over 30 minutes to solve with a DOS
screen reader, and I feel it would be the same in this case once Speakup
and Svlpro get to the power of the ASAP speech program. I know this because
right now ASAP is what I use to access Linux at home. I only use
Emacspeak at college and on my laptop when I am on the go. In fact I am
writing this with Pine and have never had a problem knowing what to read
and what not to read.
You say we must think of the complex problems; fine, if we must, but then we
also must think of problems that Emacspeak has not solved yet. I have
listed them previously and still no one has answered me, so here it is again.

I want to use interactive programs in Emacs with the ease I use them in
ASAP. That means it should take care of auto-scrolling screens without
me having to re-read the screen. Someone explain to me how I can do that
and I will start changing my mind about it.

Ken /whistler

On Sat, 13 Mar 1999, Jason White wrote:

> To take the calendar example, a screen reader could be programmed to
> locate the column headers and read them automatically whenever the cursor
> moved, but this still wouldn't provide the same fluency as Emacspeak does
> when the latter announces the complete date of the calendar entry.
> Furthermore, if one moves beyond the specific calendar context and
> considers tables more generically (including the case of cells which span
> rows and columns, column groups, the use of a particular column as a
> series of de facto headers, etc.), it soon becomes clear that the
> conventional "screen reading" approach can't cope with the complexity
> inherent in the data that are being presented by the interface. If one
> considers mathematical notation, forms as encountered on web pages (where
> the placement of the label relative to the entry field is not uniform),
> and so on, it becomes evident that monitoring of the visual interface does
> not provide sufficient information to permit an equivalently effective
> auditory interface to be constructed.
> 
> One has to look beyond a particular example and consider the harder cases
> before drawing conclusions. 
> 
> 
> 


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: Speech-enabling approach (was Re: speech standard)
                         ` Whistler
@                          ` Jason White
                             ` Dave Mielke
                                             ` (2 more replies)
  0 siblings, 3 replies; 77+ messages in thread
From: Jason White @  UTC (permalink / raw)
  To: blinux-list

So long as the application in question is an Emacs extension that works
with Emacspeak, then the scrolling of the screen is irrelevant: one simply
instructs Emacspeak to read the contents of the buffer. When Emacspeak is
acting as a screen reader within the terminal emulator, the benefit of its
unique approach is lost, as it is simply reading the text that appears in
the terminal window.

Fortunately, Emacs, together with its extensions, handles most common
computing tasks very effectively: text editing, web browsing, typesetting
(via LaTeX), mail and news reading, providing access to the shell and
command line tools, file management, a spreadsheet, a calendar and diary,
software development for programmers, and so forth.

Thus, while I do not maintain that Emacspeak is perfect, I would argue
that it is an excellent solution for many purposes, and that its
underlying design principles can and should be adopted and adapted to make
other computing environments convenient to use in a variety of media,
including audio and braille.



^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: Speech-enabling approach (was Re: speech standard)
                           ` Jason White
@                            ` Dave Mielke
                               ` Jason White
                             ` Whistler
                             ` yvonne
  2 siblings, 1 reply; 77+ messages in thread
From: Dave Mielke @  UTC (permalink / raw)
  To: blinux-list

On Sat, 13 Mar 1999, Jason White wrote:
>Fortunately, Emacs, together with its extensions, handle most common
>computing tasks very effectively: 

Promoting a particular interface as "all that is realistically needed" just
because a lot of software has already been written to conform to its own
particular small view of things is a fallacy. Each user should be free to pick
whichever tool is best for each particular job. It would only be a valid line
of reasoning if another, even more important, aspect were true, i.e. that the
writers of the software which is compliant with that interface always
did a better job than those who write other variants of that same software.

While it is of course true that the EmacSpeak approach can more easily do a
better job, so long as the user only ever uses applications which have been
written as Emacs extensions, it is a fallacy to believe either that Elisp is
the best language in which to implement any given tool or that the Elisp
writers always do the best job. Given this self-evident state of affairs, why
would anyone do anything which would stifle attempts by others to try to come
up with a better idea? It is true that their ideas may seem very feeble, and
even unrealistic, at first, but, if they're not permitted to grow, the state of
affairs will never improve and we'll be stuck with the status quo.

-- 
Dave Mielke           | 856 Grenon Avenue | I believe that the Bible is the
Phone: 1-613-726-0014 | Ottawa, Ontario   | Word of God. Please contact me
EMail: dave@mielke.cc | Canada  K2B 6G3   | if you're concerned about Hell.


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: Speech-enabling approach (was Re: speech standard)
                           ` Jason White
                             ` Dave Mielke
@                            ` Whistler
                               ` Jason White
                             ` yvonne
  2 siblings, 1 reply; 77+ messages in thread
From: Whistler @  UTC (permalink / raw)
  To: blinux-list

While your reply makes another good advertisement for Emacspeak, I don't
think Emacspeak needs another good advertisement.  I have said I like it, I
have said it does many things, but you haven't seemed to be able to tell me
what I need to do in Linux, and that is the interaction with other users.
You say you can have it re-read the buffer; that is not what I want it to
do.  I want it to read as text comes in, and not re-read it every time the
screen scrolls.  I am willing to walk you through what I need to be able
to do.  If you can tell me Emacspeak works, then great; otherwise stop the
advertising and answer the question.

Ken /whistler

On Sat, 13 Mar 1999, Jason White wrote:

> So long as the application in question is an Emacs extension that works
> with Emacspeak, then the scrolling of the screen is irrelevant: one simply
> instructs Emacspeak to read the contents of the buffer. When Emacspeak is
> acting as a screen reader within the terminal emulator, the benefit of its
> unique approach is lost, as it is simply reading the text that appears in
> the terminal window.
> 
> Fortunately, Emacs, together with its extensions, handle most common
> computing tasks very effectively: text editing, web browsing, typesetting
> (via LaTeX), mail and news reading, providing access to the shell and
> command line tools, file management, a spreadsheet, a calendar and diary,
> software development for programmers, and so forth.
> 
> Thus, while I do not maintain that Emacspeak is perfect, I would argue
> that it is an excellent solution for many purposes, and that its
> underlying design principles can and should be adopted and adapted to make
> other computing environments convenient to use in a variety of media,
> including audio and braille.
> 
> 


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: Speech-enabling approach (was Re: speech standard)
                             ` Dave Mielke
@                              ` Jason White
  0 siblings, 0 replies; 77+ messages in thread
From: Jason White @  UTC (permalink / raw)
  To: blinux-list

On Sat, 13 Mar 1999, Dave Mielke wrote:

> On Sat, 13 Mar 1999, Jason White wrote:
> >Fortunately, Emacs, together with its extensions, handle most common
> >computing tasks very effectively: 
> 
> Promoting a particular interface as "all that is realistically needed" just
> because a lot of software has already been written to conform to its own
> particular small view of things is a fallacy. Each user should be free to pick
> whichever tool is best for each particular job. It would only be a valid line
> of reasoning if another, even more important, aspect were true, i.e. that the
> writers of the software which is compliant to that interface always only ever
> did a better job than those who write other variants of that same software.

This argument is perfectly consistent with my statements as quoted above,
which is why I think that the design principles developed in Emacspeak
should be further developed and applied in other environments. The recent
discussion of an access module for the Gnome desktop is a case in point:
the access is being built directly into the environment rather than into
an external screen reader, and it is possible to take advantage of the
rich data structures of applications to provide a better auditory
interface -- precisely one of the central features of Emacspeak, the
advantages of which I have been emphasizing in these discussions.

> While it is of course true that the EmacSpeak approach can more easily do a
> better job, so long as the user only ever uses applications which have been
> written as Emacs extensions, it is a fallacy to believe either that Elisp is
> the best language in which to implement any given tool or that the Elisp
> writers always do the best job. Given this self-evident state of affairs, why
> would anyone do anything which would stifle attempts by others to try to come
> up with a better idea?

This is the first occasion in this discussion in which anyone has even
implied an attempt to stifle development in any direction, but there is a
distinction between good, future-oriented design that will take advantage
of modern and forthcoming user interfaces, and the conventional screen
reader approach which, as you agree (in the passage quoted above) does not
lead to better non-visual interfaces, being necessarily dependent entirely
on the visual presentation. If individuals wish to pursue the latter kind
of development, then they are of course free to do so (no one ever suggested
otherwise), but users would be better served by a more innovative
approach whereby the facilities required to generate high quality braille
and auditory interaction are integrated into contemporary and developing
user interfaces. Gnome, KDE, Java and, a fortiori, the X Window System,
come to mind in particular, recalling the suggestions made by Jim Gettys
in his article on the subject, which was discussed here some months ago.



^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: Speech-enabling approach (was Re: speech standard)
                             ` Whistler
@                              ` Jason White
                                 ` Whistler
  0 siblings, 1 reply; 77+ messages in thread
From: Jason White @  UTC (permalink / raw)
  To: blinux-list

Try shell mode (M-x shell), which provides speech output whenever text is
sent to the terminal. It should not be used with applications that send
terminal-oriented escape sequences. There is probably a variable that
would enable text sent to the terminal in Eterm mode to be announced
automatically, but the best solution with Emacspeak is to find an Emacs
extension that performs the desired function.



^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: Speech-enabling approach (was Re: speech standard)
                                 ` Whistler
@                                  ` Hans Zoebelein
                                     ` Whistler
                                     ` speech standard Dave Mielke
                                   ` Speech-enabling approach (was Re: speech standard) Ann K. Parsons
  1 sibling, 2 replies; 77+ messages in thread
From: Hans Zoebelein @  UTC (permalink / raw)
  To: blinux-list; +Cc: blinux-list-zocki

Ken,

could you explain to me what you mean by
"Interactive telnets with 10 to 100 people on at a time".
Do you mean IRC chat?

--Hans


On Sat, 13 Mar 1999, Whistler wrote:

> 
> I have used both shell and term and neither correctly handle the task but
> my stupid Dos screen handles the task perfectly.  As for your suggestion
> that I find something that runs in emacs and handles the task that is
> exactly what I am trying to do.  That goes to show you though that while
> Emacspeak would be perfect in a perfect world it is not a perfect world
> and we have to have both environments in fact we should have 10 or 20
> different types of speech and braille interfaces.  It was again said in a
> previous email that we should support things like Gnome over older speech
> software like old Dos softwares.  That again locks us into Gnome like we
> are being Locked into Emacspeak.  Just because something is different from
> what you think should be done doesn't mean it is not needed.  I am done
> arguing the point mainly because I freely admit that Emacspeak does some
> things great and I support it when those times arrise.  I wish people that
> support emacspeak would do the same with the other styles of speech
> software because weather they like it or not their solution is not perfect
> and like most software it never will be perfect for every solution.  My
> needs right now are for a software that can handle Interactive telnets
> with 10 to 100 people on at a time.  Emacspeak doesn't fit the bill my Dos
> software does and if Svlpro and Speak up are going to matcht hat Dos
> software then great.  
> 
> Notice I have heard no response from Raman on how I could accomplish my
> task with the Emacspeak system you have suggested two things both of which
> work poorly at best.  If the system can be so easily made to work for all
> the applications under emacs then why can I not have more control over the
> interactive telnets?
> 
> Ken /whsitler


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: Speech-enabling approach (was Re: speech standard)
                               ` Jason White
@                                ` Whistler
                                   ` Hans Zoebelein
                                   ` Speech-enabling approach (was Re: speech standard) Ann K. Parsons
  0 siblings, 2 replies; 77+ messages in thread
From: Whistler @  UTC (permalink / raw)
  To: blinux-list


I have used both shell and term, and neither correctly handles the task, but
my stupid Dos screen handles the task perfectly.  As for your suggestion
that I find something that runs in emacs and handles the task, that is
exactly what I am trying to do.  That goes to show you, though, that while
Emacspeak would be perfect in a perfect world, it is not a perfect world,
and we have to have both environments; in fact we should have 10 or 20
different types of speech and braille interfaces.  It was again said in a
previous email that we should support things like Gnome over older speech
software like old Dos software.  That again locks us into Gnome like we
are being locked into Emacspeak.  Just because something is different from
what you think should be done doesn't mean it is not needed.  I am done
arguing the point, mainly because I freely admit that Emacspeak does some
things great, and I support it when those times arise.  I wish people that
support Emacspeak would do the same with the other styles of speech
software, because whether they like it or not, their solution is not
perfect, and like most software it never will be perfect for every
situation.  My needs right now are for software that can handle interactive
telnets with 10 to 100 people on at a time.  Emacspeak doesn't fit the
bill; my Dos software does, and if Svlpro and Speakup are going to match
that Dos software, then great.

Notice I have heard no response from Raman on how I could accomplish my
task with the Emacspeak system; you have suggested two things, both of which
work poorly at best.  If the system can be so easily made to work for all
the applications under emacs, then why can I not have more control over the
interactive telnets?

Ken /whistler

On Sun, 14 Mar 1999, Jason White wrote:

> Try shell mode (M-x shell), which provides speech output whenever text is
> sent to the terminal. It should not be used with applications that send
> terminal-oriented escape sequences. There is probably a variable that
> would enable text sent to the terminal in Eterm mode to be announced
> automatically, but the best solution with Emacspeak is to find an Emacs
> extension that performs the desired function.
> 
> 


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: Speech-enabling approach (was Re: speech standard)
                           ` Jason White
                             ` Dave Mielke
                             ` Whistler
@                            ` yvonne
                               ` Whistler
                               ` Speech-enabling approach (was Re: speech standard) Hans Zoebelein
  2 siblings, 2 replies; 77+ messages in thread
From: yvonne @  UTC (permalink / raw)
  To: blinux-list

Ok, I've had enough. I was more than happy to sit and watch the Blinux
list, since I am by no means a developer and am a fairly new linux
user, but this is just insane.

How did we go from a perfectly legitimate discussion to what verges on
a flame war about which screen reader or approach to producing speech
is best? Who *cares????* Is there *one* screen reader for dos? No. Is
there one screen reader for windows? No. So, therefore, why should
there be one screen reader for Linux? No reason in the world. 

I thought that what this speech standard was meant to achieve was to
improve on DOS and Windows. Whereas in windows or DOS we are
restricted to the screen readers that support our hardware, with this
new standard, no matter whether one or fifty people develop screen
readers or quote speech subsystems unquote, *everyone* would be able
to use them. Or, at the very least, if a person could persuade someone
to write a driver for their system, they would be able to use all the
options, not just the one that their speech synthesiser maker
endorses. Have I completely missed the point, here? Or is it really
necessary for us to squabble over which speech package we like?

I have used DOS screen readers. I have used windows screen readers, I
have used stand-alone computer note takers and I have accessed Linux
with other computers through modems and networks over a period of 10
or 11 years, so I think I at least have some experience in the
matter. Most recently I'm using Emacspeak, since it worked fairly
easily, has a driver for my speech synthesiser, and I was sick of
accessing Linux through other computers.

I'm planning on taking a look at SVLpro shortly, and I'm dying to look
at Speakup and the server for Gnome just as soon as there are drivers
that will let me access them. I may love one of them. I may hate them
all and get a friend to write something else, but that is completely
irrelevant. How many different window managers are there for Linux?
Has anyone ever counted? But they're all just different approaches to
doing the same thing.

Why don't we start trying to come up with the speech driver standard,
rather than having the speech subsystem camp at the throat of the
screen reader camp? In the long run I think this will be far more
productive.

Seeya
Yvonne
P.S. Sorry about that. I'm not generally speaking prone to ranting
like that <grin>. Maybe I've just been reading way too much mail today.


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: Speech-enabling approach (was Re: speech standard)
                                   ` Hans Zoebelein
@                                    ` Whistler
                                       ` Ann K. Parsons
                                     ` speech standard Dave Mielke
  1 sibling, 1 reply; 77+ messages in thread
From: Whistler @  UTC (permalink / raw)
  To: blinux-list

No, I mean MUDs and other games of the sort.  Yes, it would work a lot
like IRC chat, but no, it is not the same.  You can use normal telnet or mud
clients like Tintin in Linux.  When I try this through Emacspeak, it is a
big mess.  I have someone in this group who is also going to give it a
shot.  For those who are willing to help me out and find a way for Emacspeak
to solve this problem, the way you access the game is:

telnet valhalla.com 4242

It is full text and gives my Dos speech and NCSA telnet client over
ethernet no problem at all.

Ken /whistler

On Sun, 14 Mar 1999, Hans Zoebelein wrote:

> Ken,
> 
> could you explain to me what you mean with 
> "Interactive telnets with 10 to 100 people on at a time".
> Do you mean IRC chat?
> 
> --Hans
> 
> 
> On Sat, 13 Mar 1999, Whistler wrote:
> 
> > 
> > I have used both shell and term and neither correctly handle the task but
> > my stupid Dos screen handles the task perfectly.  As for your suggestion
> > that I find something that runs in emacs and handles the task that is
> > exactly what I am trying to do.  That goes to show you though that while
> > Emacspeak would be perfect in a perfect world it is not a perfect world
> > and we have to have both environments in fact we should have 10 or 20
> > different types of speech and braille interfaces.  It was again said in a
> > previous email that we should support things like Gnome over older speech
> > software like old Dos softwares.  That again locks us into Gnome like we
> > are being Locked into Emacspeak.  Just because something is different from
> > what you think should be done doesn't mean it is not needed.  I am done
> > arguing the point mainly because I freely admit that Emacspeak does some
> > things great and I support it when those times arrise.  I wish people that
> > support emacspeak would do the same with the other styles of speech
> > software because weather they like it or not their solution is not perfect
> > and like most software it never will be perfect for every solution.  My
> > needs right now are for a software that can handle Interactive telnets
> > with 10 to 100 people on at a time.  Emacspeak doesn't fit the bill my Dos
> > software does and if Svlpro and Speak up are going to matcht hat Dos
> > software then great.  
> > 
> > Notice I have heard no response from Raman on how I could accomplish my
> > task with the Emacspeak system you have suggested two things both of which
> > work poorly at best.  If the system can be so easily made to work for all
> > the applications under emacs then why can I not have more control over the
> > interactive telnets?
> > 
> > Ken /whsitler
> kk
> 


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: speech standard
                                   ` Hans Zoebelein
                                     ` Whistler
@                                    ` Dave Mielke
                                       ` Nicolas Pitre
  1 sibling, 1 reply; 77+ messages in thread
From: Dave Mielke @  UTC (permalink / raw)
  To: blinux-list; +Cc: blinux-list-zocki

I would like to step out on a limb and, based on the previous discussions, dare
to propose a standard for the operation of speech synthesizers in Linux. If
done right, it could (should) become the standard for all Unix systems, if not
all operating systems. I think there is enough information at hand to do this,
and all that is lacking is for someone to take a position so that an actual
direction will be taken. So, for better or worse, whatever anyone may think,
and whether or not I appear to be foolish, here goes:

The intention of this standard is to provide a common mid-point protocol, so
that speech access applications do not have to understand how to drive speech
synthesizers. This would leave a user free to use whichever speech access
application he wishes, and to use whichever speech synthesizer, hardware or
software, he wishes, without having to be concerned with whether the
application supports the synthesizer. It would also encourage vendors to
supply the drivers for their synthesizers, as they would know that this extra
effort would not be wasted.
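As a rough sketch, such a mid-point protocol could be a line-oriented
command stream modelled loosely on what the Emacspeak servers already read.
Every name below (the command letters, the `SpeechClient` class) is invented
for illustration, not an existing interface:

```python
# Hypothetical sketch of a line-oriented "mid-point" speech protocol.
# The command names (q = queue, d = dispatch) loosely follow the
# Emacspeak speech servers' conventions, but nothing here is a real,
# existing interface.

def format_command(name, text=None):
    """Render one protocol command as a line the server would read.

    Commands carrying text wrap it in braces, e.g. 'q {Hello world}';
    bare commands such as 'd' (dispatch) stand alone.
    """
    if text is None:
        return f"{name}\n"
    return f"{name} {{{text}}}\n"


class SpeechClient:
    """Queue text and flush it to a synthesizer server as one stream."""

    def __init__(self):
        self._pending = []

    def say(self, text):
        # Queue text instead of opening a new connection per request,
        # avoiding the one-client-per-utterance problem described for
        # the default FESTIVAL server option earlier in the thread.
        self._pending.append(format_command("q", text))

    def flush(self):
        # Dispatch everything queued so far as a single payload.
        stream = "".join(self._pending) + format_command("d")
        self._pending.clear()
        return stream
```

Queuing on the client side keeps one connection per application no matter
how many speech requests arrive, which is exactly the behaviour the default
FESTIVAL option lacks.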

The EmacSpeak servers should be used to drive all of the various hardware and
software speech synthesizers. They already exist, they're quite versatile,
they've been refined and proven over time, and EmacSpeak, the most complex and
extensive application which already uses them, would not have to be modified.
Other speech access applications (screader, svlpro) would, but that would
probably not be too difficult a task.

Kernel resident speech access applications, e.g. SpeakUp, as well as any other
needed and/or desired speech access within the kernel, could easily be
supported too because it is not necessary for the servers to be right in the
kernel just because the kernel has need of them. The kernel is able to
communicate over a socket to software residing in a server which runs as its
own external process. Automounters and NFS servers are examples of this kind of
approach.
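The kernel-to-user-space communication described above can be sketched with
a Unix-domain stream socket: the external server process reads
newline-terminated requests and hands each one to whatever driver is
configured. A minimal illustration (the message contents and the
pass-through "driver" are invented for the example; a real setup would bind
an AF_UNIX listening socket at an agreed-upon path):

```python
import socket

def relay(conn, driver, max_bytes=4096):
    """Read newline-terminated requests from a socket and hand each
    line to the synthesizer driver, until the peer closes."""
    buf = b""
    while True:
        chunk = conn.recv(max_bytes)
        if not chunk:
            break
        buf += chunk
        while b"\n" in buf:
            line, buf = buf.split(b"\n", 1)
            driver(line.decode())

def demo():
    # A socketpair stands in for the kernel side talking to the
    # external server process; the "driver" just records what it
    # would have spoken.
    spoken = []
    kernel_side, server_side = socket.socketpair()
    kernel_side.sendall(b"console 1 changed\nnew mail\n")
    kernel_side.close()
    relay(server_side, spoken.append)
    server_side.close()
    return spoken
```

The point of the sketch is only that the kernel needs nothing more than a
socket write; all synthesizer knowledge stays in the external process, as
with automounters and NFS servers.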

These servers should be renamed to reflect their more general role. To get the
voting started, I'll throw out the potential name "synthserv".

A different server would be required for each unique synthesizer model. Many
have already been implemented. Servers would still need to be implemented for
the remaining unsupported hardware synthesizers, and servers should be written
for each known software speech synthesizer (festival, mbrola, etc.).
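Matching each synthesizer model to its server could then be a simple
lookup; a hypothetical sketch, with every model and program name invented
for illustration:

```python
# Hypothetical mapping from synthesizer model to its "synthserv"
# driver executable; every name here is invented for illustration.
SERVERS = {
    "dectalk-express": "synthserv-dectalk",
    "doubletalk":      "synthserv-doubletalk",
    "festival":        "synthserv-festival",
    "mbrola":          "synthserv-mbrola",
}

def server_for(model):
    """Return the driver program for a synthesizer model, or raise
    a clear error so misconfiguration is caught early."""
    try:
        return SERVERS[model.lower()]
    except KeyError:
        raise ValueError(f"no speech server known for {model!r}") from None
```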

An SSIL interface which communicates with these servers should be written to
facilitate the easy porting of other, perhaps even vendor-supplied, speech
access applications.

A single person, or a very small group of people, must be selected to be the
keeper of this standard. The keeper must be granted the authority to have the
final say with respect to what is in and what is out, as that's the only way to
ensure that there is a standard. The keeper must be committed to an open mind,
must not have any private agendas in this area, must be able to wisely consider
the long term ramifications of suggestions made by others, and must have great
respect for contributions made by others without necessarily being unreasonably
swayed by them.

-- 
Dave Mielke           | 856 Grenon Avenue | I believe that the Bible is the
Phone: 1-613-726-0014 | Ottawa, Ontario   | Word of God. Please contact me
EMail: dave@mielke.cc | Canada  K2B 6G3   | if you're concerned about Hell.


^ permalink raw reply	[flat|nested] 77+ messages in thread

* please think before hitting reply...
                               ` Whistler
@                                ` Hans Zoebelein
  0 siblings, 0 replies; 77+ messages in thread
From: Hans Zoebelein @  UTC (permalink / raw)
  To: blinux-list; +Cc: whistler

An example of how _not_ to reply. Here we have one line of
text, one line of signature, and sixty-six lines of quoted text.


--Hans


On Sat, 13 Mar 1999, Whistler wrote:

> Agreed.
> 
> Ken /whsitler
> 
> On Sun, 14 Mar 1999 yvonne@fl.net.au wrote:
[64 lines kicked] 
> > with subject line: unsubscribe
> > 
> 


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: Speech-enabling approach (was Re: speech standard)
                             ` yvonne
@                              ` Whistler
                                 ` please think before hitting reply Hans Zoebelein
                               ` Speech-enabling approach (was Re: speech standard) Hans Zoebelein
  1 sibling, 1 reply; 77+ messages in thread
From: Whistler @  UTC (permalink / raw)
  To: blinux-list

Agreed.

Ken /whsitler

On Sun, 14 Mar 1999 yvonne@fl.net.au wrote:

> Ok, I've had enough. I was more than happy to sit and watch the Blinux
> list, since I am by no means a developer and am a farely new linux
> user, but this is just insane.
> 
> How did we go from a perfectly legitimate discussion to what verges on
> a flame war about which screen reader or approach to producing speech
> is best? Who *cares????* Is there *one* screen reader for dos? No. Is
> there one screen reader for windows? No. So, therefore, why should
> there be one screen reader for Linux? No reason in the world. 
> 
> I thought that what this speech standard was meant to achieve was to
> improve on DOS and Windows. Whereas in windows or DOS we are
> restricted to the screen readers that support our hardware, with this
> new standard, no matter whether one or fifty people develop screen
> readers or quote speech subsystems unquote, *everyone* would be able
> to use them. Or, at the very least, if a person could persuade someone
> to write a driver for their system, they would be able to use all the
> options, not just the one that their speech synthesiser maker
> endorses. Have I completely missed the point, here? Or is it really
> necessary for us to squabble over which speech package we like?
> 
> I have used DOS screen readers. I have used windows screen readers, I
> have used stand-alone computer note takers and I have accessed Linux
> with other computers through modems and networks over a period of 10
> or 11 years, so I think I at least have some experience in the
> matter. Most recently I'm using Emacspeak, since it worked farely
> easily, has a driver for my speech synthesiser, and I was sick of
> accessing Linux through other computers.
> 
> I'm planning on taking a look at SVLpro shortly, and I'm dying to look
> at Speakup and the server for Gnome just as soon as there are drivers
> that will let me access them. I may love one of them. I may hate them
> all and get a friend to write something else, but that is completely
> irrelevant. How  many different window managers are there for Linux?
> Has anyone ever counted? But they're all just different approaches to
> doing the same thing.
> 
> Why don't we start trying to come up with the speech driver standard,
> rather than having the speech subsystem camp at the throat of the
> screen reader camp? In the long run I think this will be far more
> productive.
> 
> Seeya
> Yvonne
> P.S. Sorry about that. I'm not generally speaking prone to ranting
> like that <grin>. Maybe I've just been reading way too much mail today.
> 


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: speech standard
                                     ` speech standard Dave Mielke
@                                      ` Nicolas Pitre
                                         ` James R. Van Zandt
  0 siblings, 1 reply; 77+ messages in thread
From: Nicolas Pitre @  UTC (permalink / raw)
  To: Dave Mielke; +Cc: blinux-list, blinux-list-zocki

On Sat, 13 Mar 1999, Dave Mielke wrote:

[...]
> The EmacSpeak servers should be used to drive all of the various hardware and
> software speech synthesizers. They already exist, they're quite versatile,
> they've been refined and proven over time, and EmacSpeak, the most complex and
> extensive application which already uses them, would not have to be modified.
> Other speech access applications (screader, svlpro) would, but that would
> probably not be too difficult of a task.

Agreed.
However, I would like to see the word "dectalk" removed from all
commands so they could be generic.

> Kernel resident speech access applications, e.g. SpeakUp, as well as any other
> needed and/or desired speech access within the kernel, could easily be
> supported too because it is not necessary for the servers to be right in the
> kernel just because the kernel has need of them. The kernel is able to
> communicate over a socket to software residing in a server which runs as its
> own external process. Automounters and NFS servers are examples of this kind of
> approach.

However, the only reason and motivation to have speech in the kernel is to
allow speech to be usable independently of what is in user space, i.e.
user space can even be absent and Speakup will work anyway.  Otherwise
there is no advantage to putting a screen reader in the kernel if a
user-space component is required.  Therefore a kernel approach like Speakup
will have to be self-contained.  This is a very powerful approach, but very
costly in terms of maintenance.  Anyway, all this is to say that Speakup has
no reason to stay in the kernel if it requires external drivers.

> These servers should be renamed to reflect their more general role. To get the
> voring started, I'll throw out the potential name "synthserv".
[...]
> A single person, or a very small group of people, must be selected to be the
> keeper of this standard. The keeper must be granted the authority to have the
> final say with respect to what is in and what is out, as that's the only way to
> insure that there is a standard. The keeper must be committed to an open mind,
> must not have any private agendas in this area, must be able to wisely consider
> the long term ramifications of suggestions made by others, and must have great
> respect for contributions made by others without necessarily being unreasonably
> swayed by them.

Is anyone willing to volunteer?
I suggest you, Dave, as a potential candidate.


Nicolas Pitre, B. ing.
nico@cam.org



^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: Speech-enabling approach (was Re: speech standard)
                             ` yvonne
                               ` Whistler
@                              ` Hans Zoebelein
  1 sibling, 0 replies; 77+ messages in thread
From: Hans Zoebelein @  UTC (permalink / raw)
  To: blinux-list

Yvonne,

as you may know, blinux-list has been available for nearly three years now.
The results so far are more than disappointing. We have only a handful
of people who are capable of writing reasonable code.  The list has
degenerated into a meeting place where you go when you are proud of a
blown-up ego "..go out out my way.." or if you think it is handy to hit
the "send" button when you have any problem with your software "..what
can I do to let Netscape...?"

You may be happy to sit and watch the Blinux list; others are not. One of
our problems is that everybody thinks "this is my list": the newbie who
considers starting with Linux and the experienced developer who eventually
wants to contribute to a standard speech server. Finally everybody is
unhappy. So we have to change this setup.

--Hans


On Sun, 14 Mar 1999 yvonne@fl.net.au wrote:

> Ok, I've had enough. I was more than happy to sit and watch the Blinux
> list, since I am by no means a developer and am a fairly new Linux
> user, but this is just insane.
> 
> How did we go from a perfectly legitimate discussion to what verges on
> a flame war about which screen reader or approach to producing speech
> is best? Who *cares????* Is there *one* screen reader for dos? No. Is
> there one screen reader for windows? No. So, therefore, why should
> there be one screen reader for Linux? No reason in the world. 
[stuff omitted] 
 





* Re: Speech-enabling approach (was Re: speech standard)
                                 ` Whistler
                                   ` Hans Zoebelein
@                                  ` Ann K. Parsons
  1 sibling, 0 replies; 77+ messages in thread
From: Ann K. Parsons @  UTC (permalink / raw)
  To: blinux-list

Hi Ken,

If you're looking for interactive telnets, it sounds like you're
working with a MOO or MUD or something similar.  If this is the case,
Matt Campbell has developed a speech package for RMOO, a Linux app.
If this is what you're looking for, write me or write Matt at:
mattcamp@feist.com

Ann P.

-- 
Ann K. Parsons:  email:  akp@eznet.net
web site:  http://home.eznet.net/~akp
MICQ Number:  33006854
"All that is gold does not glitter.  Not all those who wander are lost."  JRRT



* Re: Speech-enabling approach (was Re: speech standard)
                                     ` Whistler
@                                      ` Ann K. Parsons
  0 siblings, 0 replies; 77+ messages in thread
From: Ann K. Parsons @  UTC (permalink / raw)
  To: blinux-list

Hi all,

ah-ha!  Yes, yes, you want RMOO, that's what you want, RMOO!  Contact
Matt Campbell!

Hey, guys, I'm getting there.  I can finally help someone else!
Wahoo!  <grin>

Ann P.

-- 
Ann K. Parsons:  email:  akp@eznet.net
web site:  http://home.eznet.net/~akp
MICQ Number:  33006854
"All that is gold does not glitter.  Not all those who wander are lost."  JRRT



* Re: speech standard
                                       ` Nicolas Pitre
@                                        ` James R. Van Zandt
                                           ` Nicolas Pitre
  0 siblings, 1 reply; 77+ messages in thread
From: James R. Van Zandt @  UTC (permalink / raw)
  To: blinux-list


Starting with Emacspeak 8.0, many synthesizer commands are generic,
for example "tts_set_speech_rate".

However, there are also still a number of DECtalk commands, like this:
 [:np :dv sm  40  ri  40   hr  7   sr  10  ]
which switches to a variation on Paul's voice.

			- Jim Van Zandt


>Date: Sat, 13 Mar 1999 23:08:04 -0500 (EST)
>From: Nicolas Pitre <nico@CAM.ORG>
>
>On Sat, 13 Mar 1999, Dave Mielke wrote:
>
>[...]  The EmacSpeak servers should be used to drive all of the
>> various hardware and software speech synthesizers. They already
>> exist, they're quite versatile, they've been refined and proven
>> over time, and EmacSpeak, the most complex and extensive
>> application which already uses them, would not have to be modified.
>> Other speech access applications (screader, svlpro) would, but that
>> would probably not be too difficult of a task.
>
>Agreed.
>However I would like to see the word "dectalk" removed from all
>commands so they could be generic.
...
>
>Nicolas Pitre, B. ing.
>nico@cam.org



* Re: speech standard
                                         ` James R. Van Zandt
@                                          ` Nicolas Pitre
  0 siblings, 0 replies; 77+ messages in thread
From: Nicolas Pitre @  UTC (permalink / raw)
  To: blinux-list

On Sun, 14 Mar 1999, James R. Van Zandt wrote:

> 
> Starting with Emacspeak 8.0, many synthesizer commands are generic,
> for example "tts_set_speech_rate".
> 
> However, there are also still a number of DECtalk commands, like this:
>  [:np :dv sm  40  ri  40   hr  7   sr  10  ]
> which switches to a variation on Paul's voice.

All cases like this would have to become more generic, like the
"tts_set_speech_rate" example quoted above.

However, this requires collaboration from T. V. Raman to succeed.
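As a rough sketch (all module and function names here are invented for
illustration; only the DECtalk control string mirrors the quoted example),
such a generic layer could map abstract voice parameters onto
device-specific command strings through per-synthesizer driver tables:

```python
# Hypothetical sketch: a generic voice-setting command is translated into
# synthesizer-specific control codes by a per-device driver table.  The
# DECtalk string mirrors the one quoted above; everything else is invented.

DRIVERS = {
    "dectalk": lambda p: "[:np :dv sm {sm} ri {ri} hr {hr} sr {sr} ]".format(**p),
    # other synthesizers would register their own translators here
}

def tts_set_voice(synth, params):
    """Translate generic voice parameters into a device-specific command."""
    return DRIVERS[synth](params)

print(tts_set_voice("dectalk", {"sm": 40, "ri": 40, "hr": 7, "sr": 10}))
```

With such a table in place, applications would only ever emit the generic
call; adding a new synthesizer means adding one translator entry.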


Nicolas Pitre, B. ing.
nico@cam.org




* Re: Internal data (was Re: Speech-enabling approach)
                       ` Jason White
                         ` Whistler
@                        ` T.Pospisek's MailLists
                           ` Nolan Darilek
  1 sibling, 1 reply; 77+ messages in thread
From: T.Pospisek's MailLists @  UTC (permalink / raw)
  To: blinux-list

On 13 Mar, Jason White wrote:
> To take the calendar example, a screen reader could be programmed to
> locate the column headers and read them automatically whenever the cursor
> moved, but this still wouldn't provide the same fluency as Emacspeak does
> when the latter announces the complete date of the calendar entry.
> Furthermore, if one moves beyond the specific calendar context and
> considers tables more generically (including the case of cells which span
> rows and columns, column groups, the use of a particular column as a
> series of de facto headers, etc.), it soon becomes clear that the
> conventional "screen reading" approach can't cope with the complexity
> inherent in the data that are being presented by the interface.
> [cut]
> 
> One has to look beyond a particular example and consider the harder cases
> before drawing conclusions.

Take a sighted human being as a harder example of a screen reader which
does not have access to internal program structures. Although this
person maybe doesn't use a "conventional screen reading approach" in
the pure sense, one can indeed argue that it --does-- succeed in
correctly interpreting the screen.

Access to internal data structures is not --absolutely-- necessary for
correct interpretation.

Nolan was arguing that you could teach a screen reader how to interpret
a screen by teaching it about the "patterns to be recognised", if I
might say so. I think this idea is very good: although the
implementation might be infinitely difficult, if you succeed in having
a minimum of generality, that is, in having some rudimentary
"average-linux-application-screen-layout-pattern" recognition tools,
then it'll broaden your choice of useful programs a lot. And my opinion
is that it's all about choice - we wouldn't be here in blinux if it
wasn't so.

~
t

------------------------------------------------------------------------------
             Tomas Pospisek - Freelance: Linuxing, Networking
                      http://spin.ch/~tpo/freelance
         www.SPIN.ch - Internet Services in Graubuenden/Switzerland
------------------------------------------------------------------------------



* Re: Internal data (was Re: Speech-enabling approach)
                         ` Internal data (was Re: Speech-enabling approach) T.Pospisek's MailLists
@                          ` Nolan Darilek
                             ` Jason White
                             ` T.Pospisek's MailLists
  0 siblings, 2 replies; 77+ messages in thread
From: Nolan Darilek @  UTC (permalink / raw)
  To: blinux-list

   Date: Sun, 14 Mar 1999 22:29:07 +0100 (CET)
   From: "T.Pospisek's MailLists" <tpo2@spin.ch>
   Resent-From: blinux-list@redhat.com
   Reply-To: blinux-list@redhat.com

   Take a sighted human being as a harder example of a screen reader
   which does not have access to internal program structures. Although
   this person maybe doesn't use a "conventional screen reading
   approach" in the pure sense, one can indeed argue that it --does--
   succeed in correctly interpreting the screen.

Agreed. Nice analogy. :)

   Nolan was arguing that you could teach a screen reader how to
   interpret a screen by teaching it about the "patterns to be
   recognised", if I might say so. I think this idea is very good:
   although the implementation might be infinitely difficult, if you
   succeed in having a minimum of generality, that is, in having some
   rudimentary "average-linux-application-screen-layout-pattern"
   recognition tools, then it'll broaden your choice of useful programs
   a lot. And my opinion is that it's all about choice - we wouldn't be
   here in blinux if it wasn't so.

Thanks, but that wasn't exactly what I was proposing. :)

I'll use my old, tired Pine example to illustrate what I meant. With a
'dumb' screen reader, Pine jabbers constantly. If you have 86 messages
in your inbox and delete one, only the 6 is erased and redrawn, and
you'd hear '5' instead of '85' when deleting a message. While you can
probably guess what the 5 referred to, Emacspeak+Rmail's auditory
interface provides very fluent speech and only reads what you need to
know.

So, if I ever write my screen reader, speech-enabling Pine would solve
the problem. Basically, you'd first have to identify various visual
components of the interface (status windows, etc.). Once you've done
that, you could write a Python script which, using window objects,
allows you to create various speech windows and modify their
behavior. So, you could do something like (very rough example):

from speech import *
# Create an AudioWindow object which speaks any newly-arrived
# text automatically
status = AudioWindow(0, 24, 80, 24)
# Create a menu for the message index and scrolling
msgindex = MenuWindow(15, 10, 65, 20)
. . .

These default objects would have some level of behavior attached to
them. So, the AudioWindow object speaks whenever the window's contents
change, regardless of how much of it is redrawn. Once the objects are
instantiated, they could be modified using Python. So, if you have an
area of the screen which you don't care about unless the text
'Warning!' appears, you could modify a standard method on the object
to only speak the window if that condition is met. Anyhow, it could
certainly be possible to add artificial intelligence features to the
various objects, but that's a little out of my league. :)
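A very rough sketch of how such a default object might behave follows.
All class and method names are hypothetical; a real implementation would
hook into the terminal and a speech server rather than print():

```python
# Hypothetical sketch of the window objects described above.  The
# AudioWindow speaks on any redraw by default; its condition hook can be
# overridden per-object, as in the 'Warning!' case described in the text.

class AudioWindow:
    def __init__(self, x1, y1, x2, y2):
        self.region = (x1, y1, x2, y2)
        self.condition = lambda text: True  # speak everything by default

    def on_change(self, text):
        """Called whenever any part of the window is redrawn."""
        if self.condition(text):
            self.speak(text)

    def speak(self, text):
        print("speaking:", text)

# Only speak this region when the text 'Warning!' appears in it:
alerts = AudioWindow(0, 0, 80, 1)
alerts.condition = lambda text: "Warning!" in text
alerts.on_change("all quiet")           # silent
alerts.on_change("Warning! disk full")  # spoken
```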



* Re: Internal data (was Re: Speech-enabling approach)
                           ` Nolan Darilek
@                            ` Jason White
                               ` Jason White
                               ` Dave Mielke
                             ` T.Pospisek's MailLists
  1 sibling, 2 replies; 77+ messages in thread
From: Jason White @  UTC (permalink / raw)
  To: blinux-list

This kind of approach has been tried by DOS screen reader developers who
have included macro programming in their products. Jaws for DOS and, as I
understand it, IBM screen reader have followed such a strategy. It breaks
down as application complexity increases: it becomes necessary to write
special macros for each context within an application to read the desired
information at an appropriate moment. If the visual presentation is
changed (e.g. the application has been set to use non-default colours)
then the pattern matching that relies on such uniformities ceases to
operate correctly. More importantly, this approach fails whenever the
important distinctions needed for high quality braille or speech output
are not apparent in the visual interface but do exist in the underlying
data (for example the SCOPE and HEADERS attributes in HTML 4.0 tables
which clarify the relations between header and content cells, the labels
of form fields which, thanks to HTML 4.0, can be provided explicitly in
the markup but which a graphical browser would not present visually, the
structure inherent in a marked up document, which might be presented
differently on the screen depending on the configuration of the browser
and the style sheet, the semantics reflected in TeX markup of mathematical
content which can not easily be derived from a graphical presentation,
etc.). While very sophisticated and as yet, to my knowledge, non-existent
artificial intelligence techniques might be able to identify visual cues
in a wide variety of circumstances and provide an efficient auditory or
braille presentation, the costs in terms of research, computing resources,
etc., of developing such a system would make it impractical. The better
approach is to design accessibility into user interfaces themselves so
that appropriate structural and semantic aspects of the content are made
available, automatically and in parallel with the visual interface,
wherefrom an effective and convenient braille or auditory representation
can be easily constructed.
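To illustrate the point (with invented data and function names): once
HEADERS-style associations are available internally, announcing a cell
together with its headers is trivial, whereas recovering those relations
from the rendered grid of characters is not:

```python
# Hypothetical sketch: with HTML 4.0-style header associations available
# in the internal representation, a speech interface can announce a data
# cell together with its headers -- relations that are invisible in the
# visual grid.  The table data here is invented for illustration.

headers = {"r2": "March 15", "c3": "Appointments"}   # id -> header text
cell = {"headers": ["r2", "c3"], "value": "Dentist, 2pm"}

def announce(cell, headers):
    labels = ", ".join(headers[h] for h in cell["headers"])
    return "%s: %s" % (labels, cell["value"])

print(announce(cell, headers))  # March 15, Appointments: Dentist, 2pm
```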

Thus, while I agree that a pattern matching, macro-based screen reader
will work in respect of a certain class of completely text-based
applications, effort would be better directed toward an approach that
modifies the underlying interface in the manner suggested above and which
is thus capable of handling complex user interfaces such as are provided
by the web and by an increasingly sophisticated array of desktop
applications.

Interestingly, the UltraSonix approach initially started by trying to
monitor the data sent to an X server and to derive the auditory
representation therefrom. This strategy was abandoned after it was
realised that low-level information concerning the graphical presentation
was insufficient to provide the basis of a reliable auditory interface. By
analogy, the same argument can be made in relation to the web, to software
designed to handle spreadsheets, marked up documents, etc. It is simply
more cost effective to develop means of exploiting the structural and
semantic distinctions available in the internal representation than to try
to infer them from a visual (ultimately a graphical) presentation, in
which some of the information will inevitably be lost, and could only be
inferred with great difficulty. The problem is similar to that of trying
to derive a properly marked up document from the layout implicit in an
ASCII text file.

If the underlying user interface is capable of supporting both visual and
non-visual representations, then the application will work immediately,
without any need for complex macros or pattern recognition techniques.
This is why it is important to try to influence the development of user
interfaces within the open-source/free software community, and elsewhere
for that matter, and to develop tools which can take advantage of whatever
sources of semantic and structural detail are available from within these
environments.





* Re: Internal data (was Re: Speech-enabling approach)
                             ` Jason White
@                              ` Jason White
                               ` Dave Mielke
  1 sibling, 0 replies; 77+ messages in thread
From: Jason White @  UTC (permalink / raw)
  To: blinux-list

As a further note, I understand that UltraSonix is based on a core of
screen reader functions which can be controlled by programmable scripts.
The latter could presumably be customized for particular applications, to
monitor the interface and produce specified feedback when prescribed
changes occur. Since the X Window System supports both text-based and
graphical applications, there would be no need for an additional screen
reader in order to gain the functionality which has been described.

I am not familiar with the details of how UltraSonix works in this regard.
I once had the privilege of being given a demonstration of IBM Screen
Reader for OS/2, in which the screen monitoring and reading functions, the
binding of key sequences to speech commands, etc., were accomplished by
means of a profile stack, with application-specific profiles at the top
and more generic key bindings etc., below.
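The profile stack described above might be sketched like this (all
profile names and bindings are invented for illustration, not taken from
the IBM product):

```python
# Hypothetical sketch of a profile stack: key bindings are resolved
# against application-specific profiles first, then fall through to more
# generic profiles further down the stack.

generic = {"read-line": "speak current line", "read-all": "speak screen"}
pine_profile = {"read-line": "speak message summary"}

profile_stack = [pine_profile, generic]  # top of stack first

def resolve(binding, stack):
    """Return the first definition of a binding found down the stack."""
    for profile in stack:
        if binding in profile:
            return profile[binding]
    return None

print(resolve("read-line", profile_stack))  # application-specific wins
print(resolve("read-all", profile_stack))   # falls through to generic
```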

In any event, my reservations regarding the value of this general
approach for complex interfaces and data still apply.




* Re: Internal data (was Re: Speech-enabling approach)
                           ` Nolan Darilek
                             ` Jason White
@                            ` T.Pospisek's MailLists
  1 sibling, 0 replies; 77+ messages in thread
From: T.Pospisek's MailLists @  UTC (permalink / raw)
  To: blinux-list

On 15 Mar, Nolan Darilek wrote:

>    Nolan was arguing that you could teach a screen reader how to
>    interpret a screen by teaching it about the "patterns to be
>    recognised", if I might say so. I think this idea is very good:
>    although the implementation might be infinitely difficult, if you
>    succeed in having a minimum of generality, that is, in having some
>    rudimentary "average-linux-application-screen-layout-pattern"
>    recognition tools, then it'll broaden your choice of useful
>    programs a lot. And my opinion is that it's all about choice - we
>    wouldn't be here in blinux if it wasn't so.
> 
> Thanks, but that wasn't exactly what I was proposing. :)
> 
> I'll use my old, tired Pine example to illustrate what I meant. With a
> 'dumb' screen reader, Pine jabbers constantly. If you have 86 messages
> in your inbox and delete one, only the 6 is erased and redrawn, and
> you'd hear '5' instead of '85' when deleting a message. While you can
> probably guess what the 5 referred to, Emacspeak+Rmail's auditory
> interface provides very fluent speech and only reads what you need to
> know.
> 
> So, if I ever write my screen reader, speech-enabling Pine would solve
> the problem. Basically, you'd first have to identify various visual
> components of the interface (status windows, etc.). Once you've done
> that, you could write a Python script which, using window objects,
> allows you to create various speech windows and modify their
> behavior. So, you could do something like (very rough example):
> 
> from speech import *
> # Create an AudioWindow object which speaks any newly-arrived
> # text automatically
> status = AudioWindow(0, 24, 80, 24)
> # Create a menu for the message index and scrolling
> msgindex = MenuWindow(15, 10, 65, 20)
> . . .
> 
> These default objects would have some level of behavior attached to
> them. So, the AudioWindow object speaks whenever the window's contents
> change, regardless of how much of it is redrawn. Once the objects are
> instantiated, they could be modified using Python. So, if you have an
> area of the screen which you don't care about unless the text
> 'Warning!' appears, you could modify a standard method on the object
> to only speak the window if that condition is met. Anyhow, it could
> certainly be possible to add artificial intelligence features to the
> various objects, but that's a little out of my league. :)

<g>

Yes, that's exactly what I meant. What you are doing here is defining
patterns with the use of tools (AudioWindow is such a tool). Pattern
recognition is usually associated with AI, and although it'd be nice
if the screen reader were AI-based and recognised everything by itself,
that's not necessary: with some good tools you can "teach" the
screen reader yourself.

*
t

-- 
------------------------------------------------------------------------------
             Tomas Pospisek - Freelance: Linuxing, Networking
                      http://spin.ch/~tpo/freelance
         www.SPIN.ch - Internet Services in Graubuenden/Switzerland
------------------------------------------------------------------------------



* Re: Internal data (was Re: Speech-enabling approach)
                             ` Jason White
                               ` Jason White
@                              ` Dave Mielke
                                 ` Hans Zoebelein
  1 sibling, 1 reply; 77+ messages in thread
From: Dave Mielke @  UTC (permalink / raw)
  To: blinux-list

On Mon, 15 Mar 1999, Jason White wrote:
>This is why it is important to try to influence the development of user
>interfaces within the open-source/free software community, and elsewhere
>for that matter, and to develop tools which can take advantage of whatever
>sources of semantic and structural detail are available from within these
>environments.

This, of course, is entirely true, but a bit of an understatement. Even if all
such tools were in place, the significant problem still remains that the
majority of the web page designers, who still only care about making those
pages look impressive to sighted people, need to be gently and politely
convinced that they should expend all kinds of effort not only to use the new
facilities, but also to learn how best to use them so that a blind person would
actually benefit from them.

-- 
Dave Mielke           | 856 Grenon Avenue | I believe that the Bible is the
Phone: 1-613-726-0014 | Ottawa, Ontario   | Word of God. Please contact me
EMail: dave@mielke.cc | Canada  K2B 6G3   | if you're concerned about Hell.



* Re: Internal data (was Re: Speech-enabling approach)
                               ` Dave Mielke
@                                ` Hans Zoebelein
  0 siblings, 0 replies; 77+ messages in thread
From: Hans Zoebelein @  UTC (permalink / raw)
  To: blinux-list; +Cc: charles, Dave Mielke

Let's move this discussion to blinux-develop.
It was set up for this purpose, and it will help keep the focus
on development.

To subscribe to blinux-develop send an email message to
blinux-develop-request@redhat.com with subject line: subscribe.


--Hans
maintainer blinux-lists




* Re: speech standard
   John Ramm
@  ` Dave Mielke
  0 siblings, 0 replies; 77+ messages in thread
From: Dave Mielke @  UTC (permalink / raw)
  To: blinux-list

On Mon, 15 Mar 1999, John Ramm wrote:
>Why should I have to schedule time with my 
>wife, or whoever, to install something on my computer? 

Because you love her, and would welcome yet another opportunity to spend time
with her. Because she loves you, and would welcome the opportunity to be
invited into your world just a little bit more.

-- 
Dave Mielke           | 856 Grenon Avenue | I believe that the Bible is the
Phone: 1-613-726-0014 | Ottawa, Ontario   | Word of God. Please contact me
EMail: dave@mielke.cc | Canada  K2B 6G3   | if you're concerned about Hell.



* Re: speech standard
@  John Ramm
   ` Dave Mielke
  0 siblings, 1 reply; 77+ messages in thread
From: John Ramm @  UTC (permalink / raw)
  To: blinux-list

On 1999-03-12 blinux-list@redhat.com said:
   >Seems to me a ridiculous argument about installing.  Surely no one
   >on this list, no blind user anyway, can disagree with the
   >desirability of being able to install the operating system on their
   >computer independently?  What could possibly be wrong with that?
   >It is all about accessibility and it is the main reason why I have
   >not yet installed Linux.
   >Dale R. Leavens:  Cochrane Ontario

I couldn't agree more! Why should I have to schedule time with my 
wife, or whoever, to install something on my computer? I'm not a 
great fan of Windows, but at least JAWS now speaks to you from 
the moment you put the CD in the drive. It is absolutely not 
necessary to get anyone to help you install JAWS. 

Come on, guys, let's have a system that talks from as early on as 
possible. It is still a source of irritation that I cannot access 
the BIOS setup on my PC without sighted help, but at least once 
things get going the operating system comes up talking.


John Ramm

Net-Tamer V 1.11 - Registered



* Re: speech standard
       ` Jim Rebman
@        ` Nicolas Pitre
  0 siblings, 0 replies; 77+ messages in thread
From: Nicolas Pitre @  UTC (permalink / raw)
  To: blinux-list

On Sun, 14 Mar 1999, Jim Rebman wrote:

> >>    Would it make any sense to have a unified braille/sound speech
> >>    interface (maybe at a higher level)?
> >
> >Not really.  Speech and Braille are completely different concepts, even if
> >their goal is to provide information from the running application.
> >Braille is static and dimensional while speech is volatile.  There is no way
> >to merge them without losing on one side.
> 
> Ok, but don't tell this to the people at Henter-Joyce -- they seem to have
> done a reasonable job of it.  Nobody said that the representations of each
> modality have to be identical in order for each to be effective.

I actually wrote drivers for Henter-Joyce's JFW, and the driver APIs
for braille and speech are completely separate.  The point here is not
to merge the braille and speech APIs in order to obtain uniform device
access.  A screen reader may well make use of both braille and speech
and do a good job with both of them, but the driver definitions for
braille and speech have to remain separate.
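As a sketch of that separation (all class and method names here are
hypothetical, not the actual JFW driver API):

```python
# Hypothetical sketch: one screen reader can drive both media, but the
# braille and speech driver interfaces stay separate because their
# primitives differ (a static line of cells vs. volatile queued speech).

class SpeechDriver:
    """Volatile output: queued text, rate control."""
    def say(self, text): raise NotImplementedError
    def set_rate(self, wpm): raise NotImplementedError

class BrailleDriver:
    """Static output: a line of cells on the display."""
    def write_cells(self, text): raise NotImplementedError

class NullSpeech(SpeechDriver):
    def say(self, text): return "speech: " + text
    def set_rate(self, wpm): return wpm

class NullBraille(BrailleDriver):
    def write_cells(self, text): return text[:40]  # a 40-cell display

# A screen reader uses both, each through its own interface:
print(NullSpeech().say("86 messages"))
print(NullBraille().write_cells("86 messages in inbox"))
```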



Nicolas Pitre, B. ing.
nico@cam.org




* Re: speech standard
     ` Nicolas Pitre
@      ` Jim Rebman
         ` Nicolas Pitre
  0 siblings, 1 reply; 77+ messages in thread
From: Jim Rebman @  UTC (permalink / raw)
  To: blinux-list

>>    Would it make any sense to have a unified braille/sound speech
>>    interface (maybe at a higher level)?
>
>Not really.  Speech and Braille are completely different concepts, even if
>their goal is to provide information from the running application.
>Braille is static and dimensional while speech is volatile.  There is no way
>to merge them without losing on one side.

Ok, but don't tell this to the people at Henter-Joyce -- they seem to have
done a reasonable job of it.  Nobody said that the representations of each
modality have to be identical in order for each to be effective.

-- Jim

------------------------------
James A. Rebman

Technology - Enhanced Learning Laboratory
University of Colorado, Boulder
College of Engineering and Applied Science

mailto:James.Rebman@Colorado.edu		



* Re: speech standard
   ` T.Pospisek's MailLists
@    ` Nicolas Pitre
       ` Jim Rebman
  0 siblings, 1 reply; 77+ messages in thread
From: Nicolas Pitre @  UTC (permalink / raw)
  To: T.Pospisek's MailLists; +Cc: blinux-list

On Sun, 14 Mar 1999, T.Pospisek's MailLists wrote:

> b) I don't know/use the braille lines so please don't flame me.
>    Would it make any sense to have a unified braille/sound speech
>    interface (maybe at a higher level)?

Not really.  Speech and Braille are completely different concepts, even
if their goal is to provide information from the running application.
Braille is static and dimensional while speech is volatile.  There is
no way to merge them without losing on one side.


Nicolas Pitre, B. ing.
nico@cam.org




* Re: speech standard
   Lar Kaufman
@  ` T.Pospisek's MailLists
     ` Nicolas Pitre
  0 siblings, 1 reply; 77+ messages in thread
From: T.Pospisek's MailLists @  UTC (permalink / raw)
  To: blinux-list

a) Before defining a standard I suggest having a look at existing APIs
   and standards and evaluating them. Until now I have seen the
   following ones pop up:

   Name of standard:		URL
   Ultrasonix:			http://www.henge.com/~brian/ultralin.html
   Sable:			http://www.bell-labs.com/projects/tts/sable.html
   Aural CSS2 stylesheets:	http://www.w3.org/TR/REC-CSS2
   Emacspeak driver interface:	URL ??
   SSIL:			URL ??
   softspeech:			URL ??
   speakup:			URL ??
   festival:			URL ??
   Java Speech interface:	URL ??
   Recordings for the Blind and Dyslexic's work (see Lar's Email):
   				http://www.rfb.org/??
   
   There are a few issues with those APIs: SSIL - is that a free API?
   Because if it's not, it might be illegal to use it for a GPL'ed
   work. Emacspeak - some people didn't feel at ease programming in
   Lisp. Is that an issue, or can an Emacspeak driver be written in
   whatever language one chooses?

b) I don't know/use braille displays, so please don't flame me.
   Would it make any sense to have a unified braille/speech
   interface (maybe at a higher level)?

c) What about moving this discussion to blinux-devel?

d) Off-topic: I remember there was once a discussion about updating the
   WWW pages and creating a FAQ. What's the state of that work?

Opinions?

*
t

------------------------------------------------------------------------------
             Tomas Pospisek - Freelance: Linuxing, Networking
                      http://spin.ch/~tpo/freelance
         www.SPIN.ch - Internet Services in Graubuenden/Switzerland
------------------------------------------------------------------------------



* Re: speech standard
   Martin McCormick
@  ` Whistler
  0 siblings, 0 replies; 77+ messages in thread
From: Whistler @  UTC (permalink / raw)
  To: blinux-list


Well, someone wrote me and asked how I managed to install MS-DOS
without help, and it was simple; maybe we can come up with something
just as simple, since we can fit a boot disk on a floppy.  When I
wanted to install MS-DOS I would make a boot disk with ASAP and all the
utilities on the 3 1/2" floppy.  You have mentioned CDs, but not
everyone has a CD drive; it should be small enough to fit on either a
CD or a floppy, and it should be enough to get the machine talking.
Even Microsoft is talking about this for NT 5.0, though I have not been
overly pleased with the stuff they put out.  While Emacspeak might work
as install speech on a CD-ROM, it won't work for something on a 3 1/2"
floppy, where Speakup or Svlpro would come in handy.
The 7-layer network model doesn't really work here, but it would be
nice to come up with a model for hardware-to-software speech.

Ken /whistler

On Sun, 14 Mar 1999, Martin McCormick wrote:

> 	My background is more oriented to hardware and communication
> so I probably think that way because I am used to doing so, but
> accessibility can be thought of in much the same way.  There is a
> seven-layer model used to describe how data communication works in
> which we start with the most primitive part of the system, namely a
> piece of wire or optical fiber.  This is called the Media Access
> layer.  Things get more and more complex as one goes up the layers to
> finally reach the seventh or Application layer which is the ultimate
> goal, namely successfully running a telnet, web, or email session to
> its completion.  It is a modular approach in which each successive
> layer is only responsible for certain things.  It doesn't involve
> itself with things that are above it and it assumes that the layers
> below it are designed correctly so that the information is in tact at
> least to this point.
> 
> 	The problems with doing our own installations, for example,
> are directly due to having difficulty in getting standard output of
> any kind to work during the installation process.  Having it go to a
> software synthesizer or a Braille terminal is actually secondary to
> having it at all.  I am yet another one of those who is more or less
> stuck on high center because I do not really want to have to use a
> person as a screen reader when I am up to my eyeballs in good hardware
> that would tell me everything I need to know.  I have access to a DOS
> computer with screen reader and speech and all I need is to be able to
> redirect the messages during the install process so that I don't kill
> the existing DOS partition as I format the remainder of the disk.
> 
> 	It seems to me that if we solve this problem, we can use the
> same ideas in solving the more complex problem of the ultimate
> challenge which would be coming up with an installation procedure that
> uses the sound card and does not require access via a second computer.
> As I like to say when somebody tries to solve a problem by going off
> in 10 directions at once, "Let's solve one problem at a time and work
> from there."  It at least seems easier if you do that. :-)
> 
> Martin McCormick WB5AGZ  Stillwater, OK 
> OSU Center for Computing and Information Services Data Communications Group


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: speech standard
@  Lar Kaufman
   ` T.Pospisek's MailLists
  0 siblings, 1 reply; 77+ messages in thread
From: Lar Kaufman @  UTC (permalink / raw)
  To: Dave Mielke, blinux-list; +Cc: blinux-list-zocki, blinux-list

I agree with Nicolas that Dave would be a good candidate to set initial
goals and coordinate efforts.

Of course, revisions and extensions are to be expected, but an 
initial effort can be mounted now and good progress made, I think.
I'd like to direct whoever undertakes this task to the work 
that Recordings for the Blind and Dyslexic has sponsored in defining
user-interface needs by hosting conferences on the topic.  I'll 
track down a contact if need be.  (The first RFB&D conference in
Princeton, NJ, is where I encountered T.V. Raman and found out about
Aster...)  I'm pretty sure proceedings are available, and if not, 
I have some tapes and printed materials from two conferences.

 -lar
"The sum of all we drive at is that every man may enjoy the same rights that
are granted to others." -- John Locke, 1689, A Letter Concerning Toleration


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: speech standard
@  Lar Kaufman
  0 siblings, 0 replies; 77+ messages in thread
From: Lar Kaufman @  UTC (permalink / raw)
  To: blinux-list, blinux-list

What is the minimal requirement for communicating to the installer via
audio?  How much needs to be communicated, and what can be done to 
accomplish it using standard hardware?  That's the first question in my
mind...  I'm reminded of the standard diagnostic beeps programmed into the
original PC BIOS; every "techie" quickly learned all the beep codes for
common hardware problems (missing keyboard, no video card detected, etc.)
and could do basic system debugging without even turning the display on.
Assuming, of course, that the PC's speaker was connected to the motherboard
correctly.

Surely it isn't unreasonable to set a standard for an audio communication
interface for OSes, even supporting (for example) a default Morse code 
messaging system if no audio device is detected by the kernel.  Since this
capability is pretty much universally available on any terminal, it makes
sense to provide for it at the kernel level.  Even if the only available
communication channel were an unidentifiable video interface, the display
signal itself could "beep" on and off to represent Morse 
code visually, providing a fundamental improvement in the ability of the
OS to communicate information to the administrator at startup.  Every UNIX
system knows how to send the ASCII BEL character to the terminal, so why not
just make UNIX a bit smarter and a bit less presumptuous about how the 
administrator is receiving data?

 -lar
"The sum of all we drive at is that every man may enjoy the same rights that
are granted to others." -- John Locke, 1689, A Letter Concerning Toleration


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: speech standard
@  Martin McCormick
   ` Whistler
  0 siblings, 1 reply; 77+ messages in thread
From: Martin McCormick @  UTC (permalink / raw)
  To: blinux-list

	My background is more oriented to hardware and communication
so I probably think that way because I am used to doing so, but
accessibility can be thought of in much the same way.  There is a
seven-layer model used to describe how data communication works in
which we start with the most primitive part of the system, namely a
piece of wire or optical fiber.  This is called the Physical
layer.  Things get more and more complex as one goes up the layers to
finally reach the seventh or Application layer which is the ultimate
goal, namely successfully running a telnet, web, or email session to
its completion.  It is a modular approach in which each successive
layer is only responsible for certain things.  It doesn't involve
itself with things that are above it and it assumes that the layers
below it are designed correctly so that the information is intact at
least to this point.

	The problems with doing our own installations, for example,
are directly due to having difficulty in getting standard output of
any kind to work during the installation process.  Having it go to a
software synthesizer or a Braille terminal is actually secondary to
having it at all.  I am yet another one of those who is more or less
stuck on high center because I do not really want to have to use a
person as a screen reader when I am up to my eyeballs in good hardware
that would tell me everything I need to know.  I have access to a DOS
computer with screen reader and speech and all I need is to be able to
redirect the messages during the install process so that I don't kill
the existing DOS partition as I format the remainder of the disk.

	It seems to me that if we solve this problem, we can use the
same ideas in solving the more complex problem of the ultimate
challenge which would be coming up with an installation procedure that
uses the sound card and does not require access via a second computer.
As I like to say when somebody tries to solve a problem by going off
in 10 directions at once, "Let's solve one problem at a time and work
from there."  It at least seems easier if you do that. :-)

Martin McCormick WB5AGZ  Stillwater, OK 
OSU Center for Computing and Information Services Data Communications Group


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: speech standard
       [not found] ` <dleavens@puc.net>
@    ` Ben Van Poppel
  0 siblings, 0 replies; 77+ messages in thread
From: Ben Van Poppel @  UTC (permalink / raw)
  To: blinux-list

As far as not installing Linux because you can't do it on your own: we 
must draw the line at some point between upholding honest and reasonable 
principles and sticking our heads in the sand. Yeah, I'd love to live in 
Utopia too, but it's not going to happen for a while in a 
sighted-oriented world. I got help from a sighted friend to install my 
Redhat system and it meant a lot less stuffing around on my part.

Ben.


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: speech standard
@  John Ramm
  0 siblings, 0 replies; 77+ messages in thread
From: John Ramm @  UTC (permalink / raw)
  To: blinux-list

Hi Brett (look, no duplicated reply),

I am no programmer, although Linux might change that. I just want 
to thank you for the idea of a standard for access development. 
It's a great idea and I hope everyone gets behind it.

John Ramm

Net-Tamer V 1.11 - Registered


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: speech standard
   Dale R. Leavens  My other computer is a Cray.
@  ` Nolan Darilek
       [not found] ` <dleavens@puc.net>
  1 sibling, 0 replies; 77+ messages in thread
From: Nolan Darilek @  UTC (permalink / raw)
  To: blinux-list

   From: "Dale R. Leavens  My other computer is a Cray." <dleavens@puc.net>
   Date: Fri, 12 Mar 1999 17:35:20 +0500

   Seems to me a ridiculous argument about installing.  Surely no one on this
   list, no blind user anyway, can disagree with the desirability of being able to
   install the operating system on their computer independently?  What could
   possibly be wrong with that?  It is all about accessibility, and it is the main
   reason why I have not yet installed Linux.

Very much agreement. :) Some of us (Well, me at least :) don't have
the benefit of people being around to help whenever something
fails. Granted, I'm not alone, but most of my friends and I are
heavily involved in campus activities and have busy and
constantly-changing schedules. So, while I can create a block of time
to fix a computer problem, I may not be able to find someone to help,
and even so, some people just aren't good or enthusiastic screen
readers. :)

,-----------------------------------------------------------------------------.
>              Nolan Darilek             |                                    <
>           nolan_d@bigfoot.com          |    University of Texas at Austin   <
>         nolan_d@mail.utexas.edu        |      Freshman, Computer Science    <
>              ICQ: 15709478             |      Orientation Advisor, 1999     <
>    http://www.bigfoot.com/~nolan_d/    |                                    <
`-----------------------------------------------------------------------------'
       "Cable is not a luxury, since many areas have poor TV reception."
                    		-- The mayor of Tucson, Arizona, 1989


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: speech standard
@  Dale R. Leavens  My other computer is a Cray.
   ` Nolan Darilek
       [not found] ` <dleavens@puc.net>
  0 siblings, 2 replies; 77+ messages in thread
From: Dale R. Leavens  My other computer is a Cray. @  UTC (permalink / raw)
  To: blinux-list

Seems to me a ridiculous argument about installing.  Surely no one on this
list, no blind user anyway, can disagree with the desirability of being able to
install the operating system on their computer independently?  What could
possibly be wrong with that?  It is all about accessibility, and it is the main
reason why I have not yet installed Linux.


Dale R. Leavens:  Cochrane Ontario
                  Home of the Polar Bear Express!

Net-Tamer V 1.11.2 - Registered


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: speech standard
   Updated Emacspeak packages at Blinux file archive Hans Zoebelein
@  ` Roger Butenuth
  0 siblings, 0 replies; 77+ messages in thread
From: Roger Butenuth @  UTC (permalink / raw)
  To: blinux-list

Hello,

I have uploaded my speech library on the web:

http://www.uni-paderborn.de/cs/heiss/blinux

It consists of one hardware-independent part linked to your speaking
application or screen reader, and a second, hardware-dependent part which is
loaded at runtime (with dlopen, as a shared library). This sounds more
complicated than it is; look at speech_test.c for an example of how to use
it. 

Currently the following synthesizers are supported:

- Dolphin Apollo and Gemini
- rsynth (really bad quality)
- freephone/mbrola for english, using the soundcard as device
- hadifix/mbrola for german, using the soundcard as device
(- Infovox 700, in a few days)

It should be easy to add any synthesizer that connects to the serial port;
just take dolphin.c as an example.

I would like to discuss this as a starting point for a new standard.

	Roger

--
Roger Butenuth
University of Paderborn
FB 17, Mathematik-Informatik
33095 Paderborn
Tel.: 05251/60-6611
Fax.: 05251/60-6619


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: speech standard
@  Hans Zoebelein
  0 siblings, 0 replies; 77+ messages in thread
From: Hans Zoebelein @  UTC (permalink / raw)
  To: blinux-list

The ITSY is still a "research project", as they called it. They don't know
if it will ever go into production. At the moment the low-drain parts are
still pretty expensive. But since hardware prices tend to halve after a
year has gone by, there is still hope for this nice tiny Linux
machine.

Check out www.Slashdot.org. They once had an article and pointers to
other web pages about the ITSY.

--Hans

On Thu, 11 Mar 1999, Buddy Brannan wrote:

> Hi Hans,
> 
stuff omitted.
> 
> Now, as for the Tiny. ... I'd like a bit more info on this! I've been
> thinking it's high time I get ahold of a portable notetaker or text reader,
> for of course leaving notes to myself, reading text but not being leashed
> to a computer, and so on. Would this device then use Emacspeak with Mbrola?
> Any projections on when it will be available and how/when/how much for it
> can be obtained, or at least, a URL to get more info on it?
stuff omitted



^ permalink raw reply	[flat|nested] 77+ messages in thread

* speech standard
   Brett Barback
   ` Roger Butenuth
   ` Stein Erik Skotkjerra
@  ` Ann K. Parsons
  2 siblings, 0 replies; 77+ messages in thread
From: Ann K. Parsons @  UTC (permalink / raw)
  To: blinux-list

Hi all,

I think I hear a visionary here.  I'm going to sit back and listen.

Just one quick comment: ICQ is Windows-based, yet it doesn't follow
even Gates' crude standardization. Because of that, Windows ICQ is
inaccessible to anyone except a person who is willing to work overtime
to achieve a working program.  On the other hand, MICQ
works because it follows some type of standard that is legible to the
OS.

I think Brett has a good point here.

Ann P.

-- 
Ann K. Parsons
email:  akp@eznet.net
web site:  http://home.eznet.net/~akp

"All that is gold does not glitter.
Not all those who wander are lost."
J.R.R. Tolkien


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: speech standard
   Brett Barback
   ` Roger Butenuth
@  ` Stein Erik Skotkjerra
   ` Ann K. Parsons
  2 siblings, 0 replies; 77+ messages in thread
From: Stein Erik Skotkjerra @  UTC (permalink / raw)
  To: blinux-list

Hi Brett,

This is why I wrote my message about standardization: I feel that
if we could agree on such a standard, all programs would support more
speech/braille devices, etc. The problem is that this should have been done
much earlier, since all the existing software must be rewritten to fit
into these standards. 

I do not know very much about speech, as I don't have a speech synth
myself, but I know about braille, and for braille, making such a standard
should really be considered; as I see it, it wouldn't be too
difficult. If the existing packages are not rewritten, we should at least
consider this for new software.

Regards,
Stein Erik


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: speech standard
   Brett Barback
@  ` Roger Butenuth
   ` Stein Erik Skotkjerra
   ` Ann K. Parsons
  2 siblings, 0 replies; 77+ messages in thread
From: Roger Butenuth @  UTC (permalink / raw)
  To: blinux-list

Hello!

I have written a speech library which can attach to different
synthesizers, hardware or software. It consists of one small part that is
linked to your program and a second part that is loaded at runtime (a
shared library). The second part is synthesizer dependent. Currently I
support the following synthesizers:

 - Usual sound card (with freephone/mbrola for English, 
   hadifix/mbrola for German)
 - Dolphin series (Apollo, Gemini)
 - Infovox (not complete, but really soon)

I use this to do speech extensions to brltty-2.0. I can upload the source
of the library on the web tomorrow.

Is the source of svlpro available? Without the source it is difficult to
decide whether the library can be used with that package.

	Roger

--
Roger Butenuth
University of Paderborn
FB 17, Mathematik-Informatik
33095 Paderborn
Tel.: 05251/60-6611
Fax.: 05251/60-6619


^ permalink raw reply	[flat|nested] 77+ messages in thread

* speech standard
@  Brett Barback
   ` Roger Butenuth
                   ` (2 more replies)
  0 siblings, 3 replies; 77+ messages in thread
From: Brett Barback @  UTC (permalink / raw)


After listening closely to all that is said here on the blinux list, having reviewed
the code for the dtpc module, the speakup program, and emacs, and having worked
on svlpro for all these months, I am realizing something which is really in need of evaluation.

   What I realize as I do this work is that we have lots of programmers, users, and 
well-intentioned people seeking a single thing...
   unrestricted access to the Linux operating system, equivalent to that of 
our sighted counterparts.  Given the nature of Linux, its network readiness, 
its reliability, and its current text-based shell environment, I can see why this
is such an important goal.

   What's missing from this whole picture is one thing....
   A standard.
   There are standards for everything on the net; every facet of computer science
has some standard either proposed or implemented, and that has given rise to the 
rapid growth and progressive improvement of all these things.
   But there is no standard for the development of speech or braille or any other
type of accessibility for people.
   At least, what there is, I feel, is quite inadequate.


   We can hardly hope to keep up with current trends and changes, in the environment 
which is Linux and otherwise, if we have no standard for the developer to follow.

   The benevolent Linus has given us a vehicle to not only succeed at something, 
competing alongside sighted users, but has left it to us to make it work for 
us.
   It is far easier for us who are blind to foresee what the general population is 
looking forward to in the use of this operating system than it is for those who
have no interest in its accessibility to us to see where it will leave us behind.

   One of the things that has allowed Linux to grow so rapidly and make its mark 
on the operating-system landscape is that a collaborative effort of programmers, users,
and analysts has built upon each other's work, being careful 
not to take it in a direction that leads to a dead end.

   I see our many projects, emacspeak, dtpc, etc., and I see no joint effort, 
because there is no way for any of us to know what form the next innovation will take.

   It isn't enough to just create a program that works for this device and not for that.
   It isn't enough to hope that in the end this system won't become more visually
oriented and consequently less blind-friendly.

   We must resolve that it will happen, and ask where we will be then:
each of us scrambling to get the program that works for us to fit into the new picture.
   I will not let that happen.

   We are all pleased today to have the advantage of a keyboard for input, and 
we frantically type away at code that will hopefully contort to answer the changes 
that the kernel maintainers implement.
   What happens when the day comes that the world is commonly using speech recognition
to control the device and we have no coded standard by which to adapt that code to fit
our needs?

      I am developing a model here by which we may establish our standard 
and have everyone agree that that is how we'll do it.

        I currently have in hand a driver program, the next evolution of svlpro,
that has what I think is the potential for meeting the future changes that 
face the Linux community, and us in particular.

  I have had people tell me that svlpro is the best program they have ever seen.
Probably not, but I imagine that is an instantaneous response to achieving something
not before possible.
  My goal with svlpro has, since its inception, been to make Linux access possible 
for everyone, in every way necessary.
     My program will have the ability to run multiple devices simultaneously:
sound card, hardware synth, braille display, you name it.
  When I hear people talking about GUI interfaces and fearing that the whole system
will soon become unusable to us because of them, I am not discouraged.
   All that is needed to adapt to any and all circumstances that arise is 
a standard from which to work.
   Rather than patch and rewrite every piece of code ever conceived, we could adopt 
that standard and it would reciprocate in both directions.

I know I am long-winded, but this thought has been on my mind for some time, and
my recent work has made it abundantly clear that unless we put our shoulders behind
a single stone, the ensuing landslide of tiny departures from the central issue will
bury us all.

   For now, that is the beginning; where will we take it?
And oh yeah, please don't attach this message to any replies, since I don't intend to read it fifty times, let alone one more time.
Regards, Brett Barback


^ permalink raw reply	[flat|nested] 77+ messages in thread

end of thread, other threads:[~ UTC | newest]

Thread overview: 77+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
 speech standard Hans Zoebelein
 ` Buddy Brannan
   ` Hans Zoebelein
     ` Buddy Brannan
     ` Whistler
     ` Whistler
       ` Whistler
         ` Hans Zoebelein
           ` Whistler
           ` Whistler
             ` Hans Zoebelein
         ` Dave Mielke
           ` Whistler
     ` Nolan Darilek
       ` Hans Zoebelein
         ` Nolan Darilek
           ` Hans Zoebelein
             ` Nolan Darilek
               ` Ann K. Parsons
                 ` Nolan Darilek
                   ` T. V. Raman
               ` Hans Zoebelein
                 ` Nolan Darilek
                   ` Whistler
                 ` Speech-enabling approach (was Re: speech standard) Jason White
                   ` Whistler
                     ` Jason White
                       ` Whistler
                         ` Jason White
                           ` Dave Mielke
                             ` Jason White
                           ` Whistler
                             ` Jason White
                               ` Whistler
                                 ` Hans Zoebelein
                                   ` Whistler
                                     ` Ann K. Parsons
                                   ` speech standard Dave Mielke
                                     ` Nicolas Pitre
                                       ` James R. Van Zandt
                                         ` Nicolas Pitre
                                 ` Speech-enabling approach (was Re: speech standard) Ann K. Parsons
                           ` yvonne
                             ` Whistler
                               ` please think before hitting reply Hans Zoebelein
                             ` Speech-enabling approach (was Re: speech standard) Hans Zoebelein
                       ` Internal data (was Re: Speech-enabling approach) T.Pospisek's MailLists
                         ` Nolan Darilek
                           ` Jason White
                             ` Jason White
                             ` Dave Mielke
                               ` Hans Zoebelein
                           ` T.Pospisek's MailLists
               ` speech standard T. V. Raman
                 ` Interfaces (Was Re: speech standard) Nolan Darilek
 ` speech standard Kevin Forge
   ` Charles McCathieNevile
  -- strict thread matches above, loose matches on Subject: below --
 John Ramm
 ` Dave Mielke
 Lar Kaufman
 ` T.Pospisek's MailLists
   ` Nicolas Pitre
     ` Jim Rebman
       ` Nicolas Pitre
 Lar Kaufman
 Martin McCormick
 ` Whistler
 John Ramm
 Dale R. Leavens  My other computer is a Cray.
 ` Nolan Darilek
     [not found] ` <dleavens@puc.net>
   ` Ben Van Poppel
 Hans Zoebelein
 Updated Emacspeak packages at Blinux file archive Hans Zoebelein
 ` speech standard Roger Butenuth
 Brett Barback
 ` Roger Butenuth
 ` Stein Erik Skotkjerra
 ` Ann K. Parsons

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for read-only IMAP folder(s) and NNTP newsgroup(s).