An Interview on Avatars and Automatic Translation in Sign Language
I'm talking with the CEO of Charamel about avatars for sign language. Since we are both hearing people and not experts in deaf culture or German Sign Language, please forgive any inaccuracies.
DO: Welcome to a new episode of the podcast on digital accessibility. Today I have another exciting guest with me: Mr. Alexander Stricker from the company Charamel. First of all, thank you very much for taking the time to talk to me today about sign language avatars.
AS: Yes, I'm happy to be here.
DO: Before we get into it, the listeners would like to know something about you: who are you, what do you do, and what does the company Charamel do?
Article Content
- The company Charamel
- The need for sign language content and the challenges
- A sign language avatar
- The integration of deaf people
- Future projects
- More information
- More Talks with Accessibility Specialists
The company Charamel
AS: My name is Alexander Stricker. I am one of the managing directors and founders of Charamel GmbH. We have been around since 1999 and are completely dedicated to the software development of digital virtual personalities. Back then we really started out in a niche and developed a software solution that made it possible to animate virtual characters live. At that time there was still an actor in the background; now everything is largely database-driven or AI-based.
The topic of sign language has actually been with us since 2003. At that time, a business partner of ours, Ralf Raule from yomma GmbH, with whom we also work in the sign language environment, came to us and said: can't we create a chatbot for deaf people? Our first question was: why is that necessary? Deaf people can read. We had to be proven wrong and discovered that many deaf people are not necessarily fully fluent in written language. In 2009 things really started moving in this direction, and we said: the technologies are well developed, we can use this basis and perhaps implement something in the area of sign language. This is where the whole research topic came from, and since around 2020 we have been working on translation tools that translate text into sign language.
The need for sign language content and the challenges
DO: Yes, super interesting. You have already partially answered the question of why we need sign language. But do you also have an idea of how great the need for sign language is?
AS: Yes, from our point of view there is a very great need. On the one hand, there are of course assistance functions that require personal support in translation when dealing with authorities or with certain other advisory topics. This is something we cannot yet handle with our translation tools, because we are at the very beginning. We focus more on the area of digital accessibility, and there is now an EU directive obliging public bodies to provide digitally accessible content. And if you look at how many sign language translators or interpreters there are, they cannot possibly cover it all.
Let me give you an example: there are 16.6 million websites in Germany alone. If everyone wants their content translated into sign language so that it is better understood and available to deaf people, that is simply not possible with such minimal resources. As a very first step, we want to try to make standardized topics translatable, in order to make content available in the municipal sector, for example, or in the museum sector, in a form that will actually be understood. So the need is certainly very great.
DO: Before we get to your avatar, perhaps you could explain something for us: there are now very good algorithms for translating texts from language X into language Y. What is the special challenge of automatically translating spoken or written language into sign language?
AS: Well, for one thing, there is no single sign language, so we have to do contextual translation. This means that we cannot translate texts one-to-one; they must also be understandable. That is at least the first big challenge. In principle, interpreters do this too: they think carefully about how the text can actually be made understandable and translatable.
One of the biggest challenges is to present the content as simply as possible and then translate it accordingly. If we translate word for word, the grammar is missing, the meaning is missing, and therefore the context cannot be understood. On the other hand, there is the challenge of first translating the text into a translatable intermediate text representation, and then making that representation translatable into animation, i.e. into a three-dimensional language. This is a huge challenge that we have to overcome: on the one hand the meaning analysis (what does the text mean in context?), and on the other hand generating the movement parameters needed to make an avatar animatable, i.e. to turn a sign language translation into this three-dimensional visual language.
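To make these two stages a little more concrete, here is a minimal, purely illustrative Python sketch, not Charamel's actual system: stage one maps text to a contextual gloss representation (the "translatable text language" mentioned above), stage two turns glosses into timed animation data. All names here (Gloss, TOY_LEXICON, AnimationClip) are invented assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Gloss:
    sign: str        # lexical sign identifier, e.g. "TRAIN"
    mouthing: str    # accompanying mouth pattern
    expression: str  # facial expression, e.g. "neutral", "regretful"

@dataclass
class AnimationClip:
    gloss: Gloss
    start_ms: int
    duration_ms: int

# Toy stand-in for stage 1. In reality this is the hard, AI-based part:
# a contextual, meaning-preserving mapping, not a word-by-word lookup.
TOY_LEXICON = {
    "train": Gloss("TRAIN", "zug", "neutral"),
    "delayed": Gloss("DELAY", "verspätet", "regretful"),
}

def text_to_glosses(text: str) -> list[Gloss]:
    return [TOY_LEXICON[w] for w in text.lower().split() if w in TOY_LEXICON]

def glosses_to_clips(glosses: list[Gloss], sign_ms: int = 800) -> list[AnimationClip]:
    # Toy stand-in for stage 2: assign each gloss a time slot. A real system
    # would generate per-joint movement curves and blend transitions.
    return [AnimationClip(g, i * sign_ms, sign_ms) for i, g in enumerate(glosses)]

for clip in glosses_to_clips(text_to_glosses("The train is delayed")):
    print(clip.gloss.sign, clip.start_ms, clip.duration_ms)
```

The point of the sketch is only the separation of concerns: meaning analysis first, movement generation second.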
A sign language avatar
DO: You just said that your company specializes in the development of avatars. What exactly is new about the sign language avatar that you are currently developing?
AS: From our point of view, this is the supreme discipline of what we can create, because there are of course an incredible number of details that deaf people really notice in the visual presentation. This means that we have to design the animation very carefully, otherwise it will not be understandable. That is one topic. On the other hand, we are also moving very much toward almost photorealistic representations of human actors. Of course we remain in the three-dimensional environment. There are also avatars generated on the basis of video footage; that is not the approach we are pursuing. We are working on a three-dimensional, I would say game-like environment, but one with real photorealism, i.e. a maximum degree of accuracy and quality in the animation. We have had some positive experiences with this in the past.
The biggest problem in the past has always been that comprehensibility, and therefore acceptance, failed because the forms of representation were simply too robotic or too cartoon-like. We try to solve this with realistic representation and the level of quality that the current state of technology makes possible.
DO: If I understood correctly, there were special challenges, for example that mouthing (the mouth image) is extremely important in sign language and could not be faithfully represented by previous avatars.
AS: Yes, that's right. The communicated emotion, the mouthing, the facial expressions, and also the movement sequences must flow and be precise, but they must also function in a certain synchronicity. This means that delays lead to a wrong representation and wrong comprehension, which is why the highest degree of synchrony must be present. Even a hand position at the wrong level can lead to a completely different understanding or a completely different meaning. That is exactly why all of these temporal, spatial, and precision aspects of the representation are extremely important. And you are absolutely right about mouthing: that is one of the most important factors, just as facial expressions play a very important role in German Sign Language.
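To illustrate why this synchrony matters, here is a small hypothetical Python sketch: hands, mouthing, and facial expression are modeled as parallel keyframe channels, and a helper measures the worst timing offset between them. The data structures and the idea of a perceptual threshold are assumptions for illustration, not how Charamel's animation system actually works.

```python
from dataclasses import dataclass

@dataclass
class Keyframe:
    time_ms: int
    pose: str  # simplified; a real system stores joint rotations and blend shapes

# Hypothetical parallel channels that must stay aligned for a sign to read correctly.
channels = {
    "hands":    [Keyframe(0, "rest"),    Keyframe(400, "sign_peak")],
    "mouthing": [Keyframe(0, "closed"),  Keyframe(430, "open")],
    "face":     [Keyframe(0, "neutral"), Keyframe(390, "raised_brows")],
}

def max_desync_ms(channels: dict[str, list[Keyframe]]) -> int:
    """Largest timing offset between corresponding keyframes across channels.
    If it exceeds a perceptual threshold, the sign may be misread."""
    n = min(len(frames) for frames in channels.values())
    worst = 0
    for i in range(n):
        times = [frames[i].time_ms for frames in channels.values()]
        worst = max(worst, max(times) - min(times))
    return worst

print(max_desync_ms(channels))  # 40 ms between hands, mouthing, and face here
```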
The integration of deaf people
DO: That was a long-running research project. To what extent were people whose native language was sign language involved in the project?
AS: We tried to make sure from the beginning that we had deaf employees on board. And we have a partner, the company yomma from Hamburg, who actively supported us. They have 29 employees, almost all of whom are deaf and have sign language as their native language. We had them advise us and also work on our project. yomma brought in the expertise, but we also tried to do as much educational work as possible and involve the community in various evaluation stages of the three-year research project.
This means that we developed various surveys and questionnaires, but also demonstrators that were tested by deaf people, who gave us feedback on what was good, what was bad, what was understandable and what was not, in order to achieve the highest possible acceptance of the visual presentation. But we can only learn if we work together to advance the appropriate technologies here too. And that was very important to us: that the deaf community plays an essential role. We know that we haven't done enough, I have to say that quite self-critically, and in principle we need even more expertise in future projects, even more team members whom we can involve so that we can work on the project together. This is simply a very important element in the future of sign language digitization, which we have also come to recognize.
DO: This is also important because earlier versions of such avatars were heavily criticized by the deaf community.
AS: Well, on the one hand, we have had a permanent partner in the past who brought in this language expertise and deaf interpreters. That means we actually had team members, and, to be honest, that wasn't enough, because there weren't enough people in the research project to exchange ideas with.
On the other hand, we of course recruited participants from social media who said: we would like to find out a little more about the project, and we might also like to take part in the surveys. But we have not developed a special procedure for this.
But we do have a follow-up project that we are implementing. It is about bidirectional communication, meaning that in principle you can also have a dialogue in sign language between hearing and non-hearing people. It is also about sensor-based sign recognition, a very challenging topic. In this project we are currently setting up an advisory board for ethical and moral guidance, to which we appoint deaf people who bring in expertise, and we want to involve even more team members and partners who actually work for a fee and not just on a voluntary basis.
We as hearing people can draw on a much broader pool, simply because there are many more hearing people, so not everyone has to take on work that is too demanding and unpaid. But in deaf communication we have noticed that we simply have to build up a lot more, invest a lot more, and perhaps also actively recruit team members who are paid for their work.
AVASAG was our first project of this kind, and we really had to learn a lot. I would say the first nine months to a year were really about getting us all onto the same wavelength. This means that a lot of knowledge was imparted by both the deaf and the hearing team members. We had 3D artists, software developers, and scientists in the project team, we had deaf people in the team, and we also had interpreters who were always completely involved in the processes. Even working with deaf sign language interpreters was quite difficult at first, and we had to bring everyone up to speed. A lot of time went into ensuring that we all spoke the same language and had the same understanding of the state of research and the objectives. We learned a lot there.
And we noticed that far too few native signers, i.e. people who have sign language as their mother tongue, were involved, and we want to continue to push and optimize this in the future. That is actually a major goal in the current research project and in other projects.
Maybe one more aspect that I would like to mention: we have noticed that completely new job profiles are emerging, from which I believe we can draw a lot of potential, which we also have to build up in the future, and where we want to do a lot more on the topic of inclusion. We can see that there is an incredible amount of expertise in the project that we have now implemented and that is being continued. This can result in new job profiles and jobs.
Future projects
DO: If I understood correctly, the first research project is now complete. I would also be interested to know: how long did the first research project last, and what are the next steps you will take?
AS: The first research project is AVASAG, which was about the AI-based translation of text into sign language. We chose travel as the application domain, i.e. travel information and live updates in the travel context. It ran from 2020 to 2023. In the project, a basic technology was initially researched and developed. A lot of data is still missing before we can say we are there. Think of a translation service like Google Translate or DeepL, where you put text in and a translation comes out; we are not that far along yet. Here we have only researched and developed the most basic translation mechanisms and the level of representation quality that we need to achieve for comprehensibility.
Of course, we continue to work with this basic knowledge, and we have now implemented various projects, including a large project in the area of municipal communication. We now have 70 municipalities on board for which we want to make digital services accessible in sign language, building a translation system called the municipal sign language avatar.
And a new research project, where we are again doing basic research, is BIGEKO. It is about bidirectional sign language communication, where we are trying to implement a system with which hearing and deaf people can communicate, in the setting of an emergency call simulation. I have to say simulation because we are of course not developing a production system; we are testing which requirements must be met so that an emergency call or an accident report can be made in sign language and also be received by a control center.
How did we come up with this? As part of the AVASAG project, for example, we had inquiries from individual public transport companies. They said: we have emergency call points in subway shafts and subway stations. When a deaf person goes there and presses the button, that's it; nothing else happens. We thought: how can we change that? For this, we need various sensor-based technologies with which we can capture signs, which are then translated directly into text or speech. And conversely, the control center must also be able to transmit content, put it together in modular form, and ask questions, which can then be rendered in sign language. That is the goal we will be working on over the next three years.
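To sketch this bidirectional flow, here is a small hypothetical Python example: in one direction, recognized signs become text for the control center; in the other, the control center composes its questions from vetted modules, each of which could map to a pre-validated sign language animation. The module names and functions are invented for illustration; the actual BIGEKO design may differ.

```python
# Hypothetical question modules the control center can compose; each ID
# would map to a pre-validated sign language animation on the caller's side.
MODULES = {
    "LOCATION":  "Where are you right now?",
    "INJURED":   "Is anyone injured?",
    "CONSCIOUS": "Is the person conscious?",
}

def handle_sign_input(recognized_glosses: list[str]) -> str:
    """Caller -> control center: toy stand-in for the recognition pipeline
    that turns captured signs into text (or speech) for the dispatcher."""
    return " ".join(recognized_glosses).capitalize() + "."

def compose_center_reply(module_ids: list[str]) -> list[str]:
    """Control center -> caller: questions are assembled from vetted modules,
    so each one can be rendered reliably by the sign language avatar."""
    return [MODULES[m] for m in module_ids if m in MODULES]

print(handle_sign_input(["accident", "platform", "two"]))  # Accident platform two.
print(compose_center_reply(["LOCATION", "INJURED"]))
```

Composing replies from a fixed set of modules, rather than generating free text, would keep the rendered signing predictable, which matters in a safety-critical setting like an emergency call.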
These are all projects that are still in basic research. We will first analyze and research which basic technologies are possible, and then we want to develop an application case together with the deaf community, based on what is actually needed and usable.
More information
DO: Then the last question: where can people find out more about the projects?
AS: Well, on the one hand, we of course offer a lot of information on our websites. But we also offer regular webinars. What is right for you depends on what you want to find out. On the one hand, there are webinars aimed at customers, where you can find out what options there are for translating into sign language, how we approach it, and what the next steps are.
On the other hand, there are also information channels aimed specifically at the deaf community, which we run with the company yomma. These are webinars or meetings held in sign language, where we talk about the progress of the project and explain certain results we have achieved, but also run tests, see in open discussion how people deal with the technology, and find out what the critical points are. I can only recommend following our social media channels or the websites; information about the relevant events is posted there regularly.
DO: Great, I'll definitely link these resources in the show notes. Thank you for your time, and I wish you much success with the rest of the project.
AS: I would like to thank you very much for the opportunity. All I can say is: we are still at the very beginning, and there is a lot of development work still to be done. If you think back 20 years, translation services could not really translate texts into other languages either. In this respect, we are at the very beginning, but I think it is a very important step to make all content digitally accessible here too.
More information about the avatar at Charamel
More Talks with Accessibility Specialists
- Talk with Sophie Johanning on founding an Accessibility Company
- Talk with Meike Seidel on starting an App for blind People to shop in a Supermarket
- An Interview with Flora from SUMM AI on automatic Translation in Easy Reading
- Interview with Dana Pietralla from Paged on starting an accessibility-based company
- Every Feedback is important - an Interview with Ulrike from the Accessibility monitoring center for the state of Bremen
- Barriers for the Visually Impaired - an Interview with the Editor Saskia
- How can digital teaching be inclusive?
- User research with blind and visually impaired people