A project originally developed by Peter Wood and Jim Kyle
For the Deaf Studies Trust in 1987
Although it is obvious that deaf people have difficulty in hearing what is said in meetings, there has been surprisingly little interest in solving their problems. Most of the time, people with a hearing loss are left to struggle with poor acoustics, mumbling speakers and uncontrolled interventions in meetings. Even though more than 50% of people are likely to have a very serious hearing loss by the age of 75, there seems to be little interest in addressing the problem. Perhaps because most people with a hearing loss are older, it has not been considered commercially viable to develop any technical means for their participation. Yet older people run companies and senior executives develop a hearing loss; politicians have been known not to hear, and the House of Lords has many people of great influence who have a great contribution to make if they can hear and be heard.
For all these people, one-to-one interaction may be possible, but group interaction is achieved with great difficulty.
HI-LINC is a system of software and hardware originally developed in 1987, with support from the Leverhulme Trust and IBM, to provide a live note-taking system for deaf and hard-of-hearing people. It is now in active use in around a hundred locations in the UK and is also used in Australia. It allows an operator to type, in abbreviated form, the text of what a speaker is saying; after editing, the system displays a clean text version on screen, either as a full screen (12 lines) or as a 5-line subtitle. All displayed text is automatically stored and, when used with a video adapter which has been developed for the purpose, the whole display can be recorded on video.
HI-LINC is a simple approach to a major problem - how to give access to the spoken word to those who cannot hear.
The work began when we needed a visual-text system for a conference. We examined Palantype and rejected it as too expensive, requiring specially trained operators, and often inaccurate.
What we produced was a computer-enhanced note-taking system (Kyle and Wood, 1989) whose basic principle is very simple: a speaker makes a presentation in a lecture or group setting, and an operator types what the person says.
There are a number of different modes in which the system can operate. The mode demonstrated on video is a live mode operating in real time. The display has certain characteristics: it appears word by word, and it overwrites so that the text always stays in the same place, rather like a book. The operator can also alter text before it is displayed to the user (allowing a degree of on-line editing which is unseen by the viewer).
The text appears in the upper window of the operator’s display, before being transferred to the user screen. While it is in the upper window it can be edited, so that the user sees a clean display.
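The two-window arrangement described above can be sketched as a pair of buffers: text accumulates in an operator-only edit window and moves to the user screen only when committed, so corrections are never seen by the viewer. This is an illustrative sketch, not the original implementation; the class and method names are assumptions.

```python
# Sketch of the operator/viewer split: the edit window is private to
# the operator, the user screen is what the audience sees.
class TwoWindowDisplay:
    def __init__(self):
        self.edit_window = []   # visible to the operator only
        self.user_screen = []   # visible to the audience

    def type(self, word):
        self.edit_window.append(word)

    def correct_last(self, word):
        # On-line editing: fix the last word before anyone sees it.
        if self.edit_window:
            self.edit_window[-1] = word

    def commit(self):
        # Transfer the cleaned text to the user display.
        self.user_screen.extend(self.edit_window)
        self.edit_window.clear()

d = TwoWindowDisplay()
d.type("Good")
d.type("moning")
d.correct_last("morning")   # the viewer never sees the typo
d.commit()
print(" ".join(d.user_screen))   # -> Good morning
```

The design choice is simply that nothing reaches the user screen until the operator releases it, which is what makes the final display appear clean.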
The main problem, of course, is how to keep up with the speaker, so we chose to let the computer do the work. By using abbreviations, set in a user-defined dictionary, we can cut down the number of key-presses required to display the information. Examples include: "hl" automatically expands to "HI-LINC", "cds" expands to "Centre for Deaf Studies", and, for larger sections of text, "1." produces "The HI-LINC system came into being when we required a text display for hard-of-hearing participants at an international conference in Bristol".
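The abbreviation mechanism amounts to a dictionary lookup on each typed word. A minimal sketch, assuming a simple word-boundary trigger (the original dictionary format and trigger rules are not documented here):

```python
# Hypothetical abbreviation dictionary; entries mirror the examples
# in the text. "1." stands in for a larger block of prepared text.
ABBREVIATIONS = {
    "hl": "HI-LINC",
    "cds": "Centre for Deaf Studies",
    "1.": ("The HI-LINC system came into being when we required a "
           "text display for hard-of-hearing participants at an "
           "international conference in Bristol"),
}

def expand(typed_words):
    """Replace any abbreviation with its full form; pass other words through."""
    return [ABBREVIATIONS.get(w, w) for w in typed_words]

print(" ".join(expand("hl was developed at cds".split())))
# -> HI-LINC was developed at Centre for Deaf Studies
```

A few dozen such entries, well chosen, dramatically reduce the keystrokes needed to keep pace with a speaker.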
The size of the dictionary is limited only by the operator's definitions, or rather by the operator's ability to recall them. A related feature is the system's ability to identify speakers - a major problem for hard-of-hearing people in meetings. By using the function keys, the speaker's name can be called up, and all the text which follows is attributed to that speaker and can be given a unique colour and background.
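The function-key attribution can be sketched as a small state machine: pressing a key selects a speaker, and every subsequent line carries that speaker's name and colour until another key is pressed. The key assignments, names and colours below are illustrative assumptions, not those of the original system.

```python
# Hypothetical function-key table mapping keys to (name, colour) pairs.
SPEAKERS = {
    "F1": ("Chair", "white-on-blue"),
    "F2": ("Second speaker", "yellow-on-black"),
}

class Attributor:
    def __init__(self):
        self.current = ("Unknown", "default")

    def press(self, key):
        # Unknown keys leave the current speaker unchanged.
        if key in SPEAKERS:
            self.current = SPEAKERS[key]

    def tag(self, text):
        name, colour = self.current
        return f"[{colour}] {name}: {text}"

a = Attributor()
a.press("F1")
print(a.tag("Welcome to the meeting."))
a.press("F2")
print(a.tag("Thank you, Chair."))
```

Because the attribution is sticky, the operator presses a key only when the speaker changes, which costs almost nothing in typing time.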
In its original form, HI-LINC produced a text-only display of twelve lines, but we felt this distanced the participants from the proceedings. So we produced a "magic box", or video adapter, which allows us to merge text with video from any source - camera or videorecorder. Now it is possible to caption live: the speaker can be displayed on screen with the running text underneath, or a television programme can be recorded and then subtitled.
In both cases we can call on a further facility - prepared text input. This allows us to make a presentation and display the text at the same time; it also allows the possibility of captioning programmes live. What is important is that we can break into this prepared text at any point, add new text, then return to the original, and so on. Because the operator has a "window" on the prepared text file, he or she can judge how closely the speaker is sticking to the text and then display or delete that section of the text.
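The prepared-text facility described above can be sketched as a cursor over a list of prepared sections: the operator steps through them, breaks in with live asides, and drops a section when the speaker departs from the script. This is a rough illustration under those assumptions; the method names are not the original interface.

```python
# Sketch of prepared-text input with break-in and skip.
class PreparedText:
    def __init__(self, sections):
        self.sections = sections   # the prepared script, in sections
        self.pos = 0               # the operator's "window" position
        self.output = []           # what reaches the user screen

    def show_next(self):
        # Speaker is following the script: display the next section.
        if self.pos < len(self.sections):
            self.output.append(self.sections[self.pos])
            self.pos += 1

    def skip_next(self):
        # Speaker has departed from the script: delete this section.
        self.pos = min(self.pos + 1, len(self.sections))

    def aside(self, text):
        # Break into the prepared text with live typing, then resume.
        self.output.append(text)

p = PreparedText(["Welcome.", "Our first topic is hearing loss.", "Thank you."])
p.show_next()
p.aside("(The minister has just arrived.)")
p.skip_next()
p.show_next()
print(p.output)
# -> ['Welcome.', '(The minister has just arrived.)', 'Thank you.']
```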
Finally, all the text which appears (prepared text and asides) is automatically stored for later printing and word processing, and anything which appears on screen can be videorecorded for later use.
The system is designed to be easy to operate. It works from simple drop-down menus which control all the functions and allow us to change the display type, add new speakers or abbreviations, and so on.
When we have tested HI-LINC with untrained typists, the results indicate that up to 90% of the meaning is conveyed. The advantage here is the range of people who can operate it: the system is available to all those with some keyboard skills. Where there are errors in the display, these are ordinary typing mistakes and do not affect meaning in 99% of cases.
We have now used the system extensively in conferences, public launches and even speeches by government ministers.
It is intended that the system will be updated to allow voice input and speech-to-text processing. More details on this will be available.