(Spanish version here.)
Why write sign languages? Can’t we just use video?
Of course video is the best way to record a signed language, and the best way to communicate over long distances in one. So why have we not switched to all-audio for spoken languages? Why do we still create and improve orthographies for spoken languages? It turns out that writing has some real advantages beyond habitual use.
Video recording requires, at a minimum, equipment, good lighting conditions, and a decent amount of storage space or upload time. It also requires a face. Because signing includes non-manual aspects on the face, signed videos cannot be anonymized. These factors mean that ‘video only’ excludes people without access to certain technology and people who want to contribute but do not want to be on camera.
A writing system also allows for print media in a sign language rather than a translation into a spoken language. Again, this is important for offline and non-digital access to resources such as learning materials, dictionaries, and even articles or stories.
In print, signs are typically represented, at best, as line drawings and, at worst, as glosses into a spoken language. Line drawings capture one signer performing a sign one time. They necessarily include some dialect, some angle, and some specific point (or points) in the sign’s movement, no matter how carefully staged the original photograph may be. Line drawings also require photographs or video stills of every sign that needs to be written (read: every sign in every morphological form), and they take time to create. Glosses make no attempt to refer to a sign’s phonological form at all.
For linguistic study of signed languages, it is logical and necessary to be able to refer to an abstract representation of a sign, just as writing systems create abstract representations of words. These are what should appear in syntactic and morphological analyses. Sign languages also deserve interlinear glossing. (And a code for depicting specific phonetic details. More on that below.)
Why should I use SiLOrB instead of another notation system?
SiLOrB is designed to be easy to use and easy to read. Instead of trying to combine the need for typeable Unicode characters and pictographic symbols into a single system, SiLOrB separates transcription from writing. (Note that this parallels spoken languages, which use both IPA transcription and an orthographic form.)
The transcription system uses only characters found on a standard (Latin script) keyboard, with some nod to iconicity where possible (e.g. directions depicted with ‘<’, ‘^’, etc.). The writing system uses highly iconic symbols which are arranged to look like a signer facing the reader and performing the sign (see Grid Structure).
The main purpose of the transcription system in the current version is to easily create written signs. SiLOrB codes are entered into Signotate software, which creates the pictographic written form. The current version (2.1) is designed to be used for any sign language at a phonemic level. In the future, transcription codes may provide a way to give specific phonetic details about a sign as well.
Both the transcription and the writing systems aim to be non-linear, creating a cohesive depiction of a sign that mimics its form. Both systems group a sign’s features into the categories of 1) hands, 2) location, 3) movement, and 4) non-manuals. In transcription, a description follows each category in a set order (see Transcription). The writing system combines meaningful components (see symbols pages) into one or two symbols per category, arranged to look like the sign (see Grid Structure).
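To make the four-category grouping concrete, here is a minimal sketch of how a sign's features might be organized in code. This is a hypothetical illustration only: the class name, field names, and example values are invented for this sketch and are not actual SiLOrB transcription codes.

```python
# Hypothetical sketch of a sign grouped into SiLOrB's four feature
# categories. The example values below are invented placeholders,
# NOT real SiLOrB transcription codes.
from dataclasses import dataclass


@dataclass
class SignRecord:
    hands: str        # handshape(s) and orientation
    location: str     # where on or near the body the sign is made
    movement: str     # path and manner of movement
    non_manuals: str  # facial expression, mouthing, etc.

    def transcription(self) -> str:
        """Join the four categories in a fixed order, mirroring the
        set order a transcription follows."""
        return " | ".join(
            [self.hands, self.location, self.movement, self.non_manuals]
        )


# Illustrative entry (invented values):
sign = SignRecord(hands="B^", location="chest", movement=">", non_manuals="neutral")
print(sign.transcription())  # → B^ | chest | > | neutral
```

The point of the fixed ordering is that every transcription lists the same categories in the same sequence, so a reader (or software like Signotate) always knows which description belongs to which category.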
Signotate software allows for metadata entry, saves signs to create a lexicon, and exports .svg images of written signs, which are easily scaled, edited, and used with other software. The program is designed with customization in mind for future iterations, meaning that users will be able to create their own symbols (for specific types of movement or non-manuals, for example) and create shortcuts for frequently used components (e.g. handshapes associated with a fingerspelling system).
If you already have data annotated with SignWriting or HamNoSys, automatic conversion to SiLOrB is in the works. The chart below gives a basic comparison of these three systems. Each has advantages and disadvantages to consider depending on the intended use.