A lot of people love the charm and glamour of old cars; for others, new cars are simply too expensive. But why should you have to do without modern extras? A traffic sign recognition system is a feature that helps many drivers of newer cars. We asked ourselves how we could bring this feature to older cars, where it can't be integrated into the car's infotainment system, and thought about possible solutions. For now, we decided to put our traffic sign recognition AI behind an API: you upload images and get the AI's result back in your browser. Once the website works, we might be able to bring it to other devices such as a Raspberry Pi, mobile phones or even microcontrollers. Our vision combines a Raspberry Pi with a Pi camera, a GPS module and a small display, so the detected signs and your current speed could be shown on the Raspberry Pi's screen.
Motivation

Cars give you a lot of freedom. You can travel relaxed and more flexibly, and if you want to buy groceries you don't have to carry them home on public transport. Cars are integrated into our lives, usually for a long time once we decide to buy one. But technology has changed a lot in recent years: dozens of interesting and inspiring technologies are built into a new car today, from sign detection and distance measurement to lidar for more self-driving capability. We want to bring some of the car manufacturers' great ideas to people who can't afford a new car.
Our basic concept

Our basic concept is really simple: we want to create an API where users can upload their images and get a response with the signs our AI detected. This way we can try out our system and test it in different situations. Afterwards we can evaluate our model, train it further, or prepare it for deployment on other devices for local hosting, such as a Raspberry Pi.
Our first attempt

The first model we evaluated had a massive problem recognizing traffic signs: nearly every sign was recognized as a stop sign. The reason could be short training times or too few training images. One of the bigger problems, and the main reason we could not use more images to train our model, was the labelling. For our first model we had to label every image by hand, so you can imagine how long it would take to label a good 1,000 to 2,000 images. Below is the code used to label the images. All it really does is open a labelling program that lets you select the folder containing the images so you can label them by hand. Once you do, it saves each image together with an XML file containing the bounding box.
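Labelling tools of this kind (labelImg is a common choice; we name it here only as an example) typically write one Pascal VOC XML annotation per image. As a minimal sketch, assuming that format, the bounding boxes can be read back like this (the function name is ours):

```python
import xml.etree.ElementTree as ET

def read_voc_boxes(xml_text):
    """Parse a Pascal VOC annotation (the format tools like labelImg write)
    and return a list of (label, xmin, ymin, xmax, ymax) tuples."""
    root = ET.fromstring(xml_text)
    boxes = []
    for obj in root.iter("object"):
        name = obj.findtext("name")
        bb = obj.find("bndbox")
        boxes.append((
            name,
            int(bb.findtext("xmin")), int(bb.findtext("ymin")),
            int(bb.findtext("xmax")), int(bb.findtext("ymax")),
        ))
    return boxes
```

Reading the annotations back like this is also a quick sanity check that the hand-labelled boxes actually landed where you expected.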
The setup is mostly just folder structures and installing dependencies. That part is taken care of in the code, so you do not have to do it yourself.
Sometimes installing the necessary dependencies simply fails, and you have to run the code again or install them manually from the command line.
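That "just run it again" workaround can be automated. A minimal sketch (the helper name is ours, not from the project): wrap the install step in a small retry helper so a flaky attempt is retried before giving up:

```python
def run_with_retry(step, attempts=2):
    """Run `step` (e.g. a dependency-install function) up to `attempts`
    times, returning its result on the first success.

    Re-raises the last error if every attempt fails."""
    last_exc = None
    for _ in range(attempts):
        try:
            return step()
        except Exception as exc:
            last_exc = exc  # remember the error, then try again
    raise last_exc
```

If the install still fails after the retries, the original error is re-raised, so you still see what went wrong and can fall back to installing manually.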
The full code does much more than that, but in short: the model we trained for roughly three hours with 10,000 training steps did not perform to the standard we had hoped for. So we trained an entirely new model, which was much better at recognizing the correct traffic signs. The only problem with the new model is that it needs close-up images of the traffic sign; otherwise, the accuracy drops quickly and the model returns wrong signs.
Our second attempt

For our second attempt we trained with far more images and increased the number of epochs as well. The resulting model performs really well and can detect many traffic signs. Its only weakness is the small model input size: you need to give it an image in which the traffic sign fills most of the frame. After testing the AI we decided its current state was good enough for our needs, even though it cannot handle pictures with multiple traffic signs or pictures where the sign is too small, and it sometimes confuses speed limit signs. To get data to the AI we created a Python Flask API with a simple HTML/CSS website as the front end; it would also be possible to send a custom POST request with the image to our server. There is just one problem with our backend: while testing, images in .png format repeatedly did not behave as we wanted, so currently we only accept the .jpg file format.
We decided to stick with an API for our AI model, but it would be possible to convert the current .h5 model to a TFLite model that could run on mobile devices or a Raspberry Pi. From there you could integrate more features into the device itself. Our vision includes a GPS receiver to show the current vehicle speed on the Raspberry Pi's display. We would also like to add traffic light detection, maybe even with a speaker to inform the driver about light changes or when they are driving too fast.
All in all, the project was a lot of fun and gave us great insight into training and using an artificial intelligence. We want to thank our lecturers Jan Seyler and Dionysios Satikidis for providing information, supporting us and offering the Applied AI course at Esslingen University.