"SafeSign" for Road Traffic: Can You Trust AI?
I spy with my little eye: How an artificial intelligence perceives traffic signs.
Driver assistance systems are fairly standard in modern vehicles, and these smart systems will become even more important in the future. Reading traffic signs, however, can still be a challenge for the artificial intelligence (AI) methods behind them. As part of a joint project between ASFINAG, the Johannes Kepler University Linz, and RISC Software Ltd., researchers are studying these systems in an effort to increase public confidence in these new technologies.
If we see “STOP” on an LED display before driving into a tunnel, we know the situation could be dangerous, even if individual LEDs are not working properly. And what if, during heavy snowfall for example, the posted speed limit is difficult even for us to read: can the AI still do it? People are usually able to read traffic signs that are damaged or dirty and react accordingly. For us to trust AI, these systems need to learn to do the same and prove that they can.
For road sign recognition systems to be used as intended, society needs to accept them, and acceptance comes more easily when the AI is developed in a trustworthy way. Technically sophisticated methods can create a sense of trustworthiness, but that alone is not enough. Incorporating law and ethics into the development stages goes a long way toward establishing trust, yet in practice this is not as easy as one might think. From the road user’s perspective, should AI simply try to avoid as many accidents as possible overall? Or should it focus on avoiding serious accidents at dangerous sections, such as tunnel entrances and on- and off-ramps? Should the AI control the car on its own based on the traffic signs it recognizes, or should the driver ultimately be able to take over at any time? Legal experts at the Johannes Kepler University Linz conducted an extensive survey among the population and stakeholders to find out which aspects matter most for trusting the AI. The findings, including key AI principles (autonomy, harm reduction, fairness, transparency), are to be incorporated into AI development.