Two pertinent facts:

  1. Content in one’s own native language is always going to be more accessible than content in a second language. Furthermore, the more content there is in a language, the better documented that language is and the easier it becomes to learn.
  2. There are numerous competing written standards for sign languages, and awareness and literacy in any one of these is low among users of these languages; the thought of sign languages being written down at all appears to be somewhat controversial, in fact.

It follows from these facts that there should exist a collaborative encyclopedia in sign languages, but that the current attempts to create these sign language wikis fall short because they rely on writing systems that are not actually (yet) widely used by Deaf people.

This being the issue, the obvious solution as I see it would be for the collaborative encyclopedia to instead use collaborative 3D character animation: essentially, to create videos where a cartoon avatar signs the contents of the article to the viewer. Collaborative 3D animation is already possible to do in a web app, but the challenges here are to make the animation process comparatively quick and easy for laypeople without a background in 3D animation, by specializing the software for signing; to make the software as accessible as possible for people with limited proficiency in spoken languages; and to make the animation as computationally inexpensive as possible.

The question is then what this would look like in practice. Giving it some thought, I feel like a more traditional character animation program with keyframes and the like would not be ideal for this, because even with a bunch of keyboard shortcuts it still ends up being a bit clunky and not very accessible.

So what I’ve imagined instead is a sort of visual programming language (VPL) à la Scratch, with an “IME” (input method editor) feature. I’ve given this idea the working title “SLiki”.

There’d probably be color-coded blocks in groups of two or three for:

  • Handshape
  • Fingerspelling
  • Gaze
  • Eyelids
  • Eyebrows
  • Mouth shape
  • Mouthing
  • Shaking the head
  • Nodding
  • Moving the hand(s) to a specific location
  • Adjusting the pitch/roll/yaw of various joints more precisely
  • Fluttering the fingers
  • Signing more quickly or slowly
  • Reduplication
  • Providing citations
  • Inserting links
  • Adding hatnotes
  • Creating tables
  • Inserting media

I had some more tentative ideas for blocks, too, such as blocks for modifying instructions or their timing in specific ways; and some of the blocks above might need to be split into multiple blocks, or perhaps others combined. I don’t want to get too attached to one specific way of doing things, given that this is just an idea I’m daydreaming about with zero ability to actually make it happen, and an idea like this needs to be flexible and evolve according to what works best. In any case there needs to be a way to animate rarer hand or mouth shapes, and to control blinking.
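For concreteness, here is one way the block taxonomy above could be modeled as data. This is purely a hypothetical sketch in TypeScript; every type, name, and field below is my own assumption, not anything settled:

```typescript
// Hypothetical sketch of how SLiki's blocks might be modeled as data.
// All names here are placeholders; the real taxonomy would need to
// evolve with the project.

type Handedness = "left" | "right" | "both";

interface BaseBlock {
  kind: string;
  durationMs?: number; // optional timing override for this instruction
}

interface HandshapeBlock extends BaseBlock {
  kind: "handshape";
  hand: Handedness;
  shapeId: string; // reference into a shared inventory of handshapes
}

interface MoveHandBlock extends BaseBlock {
  kind: "moveHand";
  hand: Handedness;
  target: { x: number; y: number; z: number }; // signing-space coordinates
}

// Non-manual markers: gaze, eyelids, eyebrows, mouth shape, mouthing, etc.
interface NonManualBlock extends BaseBlock {
  kind: "gaze" | "eyelids" | "eyebrows" | "mouthShape" | "mouthing";
  value: string;
}

interface TempoBlock extends BaseBlock {
  kind: "tempo";
  factor: number; // > 1 signs more quickly, < 1 more slowly
}

type Block = HandshapeBlock | MoveHandBlock | NonManualBlock | TempoBlock;
```

A discriminated union like this would let the editor, the search system, and the animation player all agree on exactly what each kind of block can contain.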

Lemma blocks and IME system

The “crown jewel” of this VPL-with-IME approach is what I call “lemma blocks”. These blocks, each named after the gloss of a sign, and ideally each accompanied by an icon representing the meaning or appearance of the sign, are essentially used as shorthands for the instructions necessary to produce the dictionary form of a given sign. One can then “peek inside” a lemma block to edit or copy its contents as needed, and one can also create new lemma blocks or save others’ to one’s custom dictionary. Purely visual contributors would probably find their desired lemma block through a tag search system or by sorting lemma blocks into categorical folders.
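Continuing the same hypothetical TypeScript sketch, the data a lemma block carries might look something like this (again, every field name here is an assumption of mine):

```typescript
// Hypothetical: a lemma block wraps a reusable sequence of low-level
// blocks under a gloss, an icon, and searchable tags.
interface LemmaBlock {
  gloss: string;     // e.g. "TREE", following some glossing convention
  language: string;  // which sign language this lemma belongs to
  iconUrl?: string;  // icon depicting the sign's meaning or appearance
  tags: string[];    // for tag search by purely visual contributors
  body: Block[];     // instructions producing the sign's dictionary form
}
```

“Peeking inside” a lemma block would then just mean exposing `body` for editing or copying, and saving someone else’s lemma block would mean copying the whole record into one’s custom dictionary.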

I’m not quite sure how the IME system would actually work in practice, other than that one would be able to type the gloss of a sign and have a list of lemma blocks matching the gloss appear on screen to select from, similarly to a Chinese or Japanese IME. There would presumably be keyboard shortcuts for all the non-lemma blocks as well, and it should generally be possible to forgo the mouse entirely when editing, for those who prefer to work that way.
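As a guess at the simplest possible version, the candidate lookup could be a plain prefix match over glosses, roughly like this hypothetical sketch (reusing the `LemmaBlock` type from above):

```typescript
// Hypothetical: given the text typed so far, return candidate lemma
// blocks, with shorter (closer) gloss matches listed first, roughly
// like the candidate list of a Chinese or Japanese IME.
function lookUpCandidates(
  dictionary: LemmaBlock[],
  typed: string,
  limit = 8,
): LemmaBlock[] {
  const query = typed.trim().toUpperCase();
  if (query === "") return [];
  return dictionary
    .filter((lemma) => lemma.gloss.toUpperCase().startsWith(query))
    .sort((a, b) => a.gloss.length - b.gloss.length)
    .slice(0, limit);
}
```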

Navigation for non-contributors

We can think of SLiki articles as having a two-or-three-level hierarchy: on the first level you have sections; within these you optionally have subsections; and within the sections or subsections you have individual sentences, which are what actually contain the code for the article. Sections, subsections, and sentences alike would have written titles (or captions, in the case of sentences), and these would perhaps by default be the words of Lorem Ipsum. Titles/captions would help people navigate through articles, similarly to the chapters and transcripts of YouTube videos; they would display to the side of the video player when viewing an article, and one could hover one’s mouse over them to see a preview of that part of the article. Articles might also have an “appendix” section below the video player, containing the tables, media, links, citations, and hatnotes featured in the article.
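Sketching that hierarchy as data, still hypothetically and still in TypeScript:

```typescript
// Hypothetical: an article is a tree of sections, optional subsections,
// and captioned sentences; only the sentences carry animation code.
interface SignedSentence {
  caption: string; // defaults to placeholder text, e.g. Lorem Ipsum words
  code: Block[];   // the animation instructions for this sentence
}

interface Subsection {
  title: string;
  sentences: SignedSentence[];
}

interface Section {
  title: string;
  sentences?: SignedSentence[]; // a section may hold sentences directly...
  subsections?: Subsection[];   // ...or group them into subsections
}

interface Article {
  title: string; // the gloss serving as the article title
  sections: Section[];
  appendix?: {
    tables: unknown[]; // placeholder types; the real shapes are unclear
    media: string[];
    links: string[];
    citations: string[];
    hatnotes: string[];
  };
}
```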

For any spoken-language text that must appear on screen while editing or navigating SLiki, one should be able to see the corresponding sign for a given word by e.g. hovering one’s mouse over it, whereupon a small animation plays in a speech bubble of sorts above the word. In any case, the ideal is to minimize the amount of spoken-language text on any given page, and to display icons next to bits of text where possible. One should also be able to set one’s own UI language preferences, perhaps including the display of SignWriting, and there should be tutorials to help new users and contributors learn the ropes.

The article search feature would work either by typing in the corresponding article title (i.e. gloss) or through a tag system, much like the lemma blocks. SLiki articles would use their first image as their thumbnail by default, but it should be possible to set a custom thumbnail. Hovering one’s mouse over an article thumbnail would play a preview of that article.

Individual articles may be downloaded as editable files for an offline editor, and from there converted into video files. Because the articles themselves are not video files, one may actually change or customize the character model right from the video player.
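This model-swapping would be possible precisely because an article stores animation instructions rather than rendered frames, so the player can bind those instructions to whichever avatar the viewer picks at playback time. A hypothetical sketch of that separation, reusing the types from earlier:

```typescript
// Hypothetical: the player retargets an article's instructions onto
// whatever avatar the viewer selects; nothing in the article itself
// refers to a specific character model.
interface Avatar {
  applyBlock(block: Block, atMs: number): void; // pose the rig for one instruction
}

function* sentencesOf(article: Article): Generator<SignedSentence> {
  for (const section of article.sections) {
    yield* (section.sentences ?? []);
    for (const sub of section.subsections ?? []) yield* sub.sentences;
  }
}

function play(article: Article, avatar: Avatar): void {
  let clock = 0;
  for (const sentence of sentencesOf(article)) {
    for (const block of sentence.code) {
      avatar.applyBlock(block, clock);
      clock += block.durationMs ?? 500; // assumed default per-block duration
    }
  }
}
```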

More considerations

Another consideration might be whether SLiki would be federated or centralized, and whether the VPL-with-IME idea might have applications outside of just collaboratively editing articles. I can imagine two instances federating lemma blocks with each other, for instance, and I can imagine something like the sign language VPL-with-IME being used for, say, social media posts, in which case it would have a similar appeal of anonymity beneath a cute avatar as the sign language communities of VRChat — but without the limitations on handshape and non-manual markers presented by VRChat, without the cost of entry of a VR headset, and without the barrier of entry of traditional 3D animation.

And of course, I'm not Deaf myself, nor a software engineer or anything like that, so I cannot even remotely guarantee that any of this would actually work or be easy to make in practice, nor can I confirm that it would be well-liked by Deaf people. This is more just meant to be food for thought.

  • JoeByeThen [he/him, they/them] · 2 months ago

    Sounds cool, might wanna check you're not re-inventing anything. Found this site when I was looking for Sign Language Phonemes. Seems like they have a lot of resources.

    https://software.sil.org/software-products/

    • Erika3sis [she/her, xe/xem] (OP) · 2 months ago

      I have honestly tried really hard to find anything similar, because it seems on some level like such an obvious idea that it would almost be preposterous that no-one would've already created such a thing by now — but alas, I've had no luck finding the real-life SLiki thus far. It's like all the pieces are already there but nobody's put them together yet. You can animate 3D characters online. You can animate collaboratively in 3D online. You can animate in 2D using a VPL online. You can animate in 3D using a VPL offline. There are sign language wikis already out there which use SignWriting. There are 3D animations of cartoon characters using sign language, and as mentioned, entire classes in sign languages now take place online using cartoon avatars in VRChat. And there are fully captioned live-action educational videos in sign language. But to the best of my ability to search the World Wide Web, it really does not seem like there is yet an online collaborative VPL for 3D character animation specialized for use in sign-language educational content.

      ...Which I guess figures. I'm sure that of all the compromises Deaf people need to make with an audist society in their daily lives, having to choose between non-collaborative educational resources in their native languages and collaborative educational resources in their second languages is just not so pressing an issue that it would generate demand for SLiki to exist without it being someone's passion project.