Unless you know sign language, it can be very difficult to communicate with a deaf person, especially face to face while carrying out everyday tasks.
As a deaf person would usually already know sign language, what is required is something that can translate sign language for the other person. This can be done by having motion sensors on the hands of the deaf person to detect their gestures. This information can then be relayed to the listener's phone via Bluetooth, and then be read aloud using the phone's artificial intelligence system. If the person who isn't deaf wears earphones and listens through them, the experience becomes even more seamless.
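To make the idea a little more concrete, here is a minimal sketch of the gesture-recognition step, assuming a glove with five flex sensors (one per finger). The reference vectors, sign names, and readings below are all invented for illustration; a real system would be trained on far richer motion data. The sketch simply matches a sensor reading to the closest calibrated sign and joins the recognized signs into text for the phone to speak.

```python
import math

# Hypothetical calibration data: each sign is represented by a reference
# vector of five flex-sensor readings (all values invented for this sketch).
REFERENCE_SIGNS = {
    "hello":     [0.1, 0.9, 0.9, 0.9, 0.9],
    "thank you": [0.8, 0.8, 0.1, 0.1, 0.1],
    "yes":       [0.9, 0.2, 0.2, 0.2, 0.2],
}

def classify_gesture(reading, references=REFERENCE_SIGNS):
    """Return the sign whose reference vector is closest (Euclidean distance)."""
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(references, key=lambda sign: distance(reading, references[sign]))

def gestures_to_sentence(readings):
    """Translate a stream of sensor readings into text for the phone to speak."""
    return " ".join(classify_gesture(r) for r in readings)

# A noisy reading close to "hello" followed by one close to "yes".
print(gestures_to_sentence([[0.15, 0.85, 0.92, 0.88, 0.9],
                            [0.88, 0.25, 0.18, 0.2, 0.25]]))
```

The text this produces is what would be sent over Bluetooth to the listener's phone and handed to its text-to-speech engine.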
Similarly, when the other person speaks, their phone's microphone can pick up the words, transcribe them, and relay them as text to a screen on the deaf person's motion-sensing wearable.
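The display side has its own small problem: a wearable screen can only show a few characters at a time. Here is a minimal sketch, assuming a hypothetical 16-character-wide, two-line display, of how the phone might break transcribed speech into pages for the wearable:

```python
import textwrap

def format_for_display(text, width=16, lines_per_screen=2):
    """Split transcribed speech into screen-sized pages for a small wearable display."""
    lines = textwrap.wrap(text, width=width)  # wrap on word boundaries
    # Group the wrapped lines into pages the screen can show one at a time.
    return [lines[i:i + lines_per_screen]
            for i in range(0, len(lines), lines_per_screen)]

for page in format_for_display("Nice to meet you, how are you today?"):
    print(page)
```

Each page would then be pushed to the wearable in turn, paced so the reader can keep up.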
Of course, there are potential problems with this solution. One drawback for the deaf person is that they have to look down at their device while the other person is speaking. The translation might also contain inaccuracies, especially in noisy environments, and relaying information between devices can introduce a time delay. However, as artificial intelligence gets smarter, there should be fewer inaccuracies.
I think another solution could become more practical in the future: augmented reality. This could allow a deaf person to have a conversation with someone else just as they normally would, with even people who don't understand sign language able to fully understand them. This would be thanks to the various sensors on AR devices, and their ability to display information overlaid on real-world environments.
As always, if you think you have a solution, or if you have a problem you would like me to try to solve, please feel free to comment.