Disaster Risk Reduction Experiments with AIY Vision Kit

sunshinebunny, June 23, 2019 (updated January 28, 2024)

(excerpt from FEU Tech Summit V speech on Risk Reduction)

The Google AIY Vision Kit is a kit that helps you build a smart camera using a Raspberry Pi Zero and a Vision Bonnet developed by Google. This camera is quite a technological breakthrough: it lets us use and train Artificial Intelligence models without connecting to the Cloud! Instead, inference (running an image or string through the model to output a result) is done on a device like the Vision Bonnet. This is called Edge AI. It also increases the privacy and speed of the models we make. Plus, we don't have to pay for online Artificial Intelligence services when we use the resources provided by the Vision Bonnet and the Raspberry Pi.

Here's how it works: First, a picture is taken by the camera. In this example, the picture is a salad. The image is then stored on the Raspberry Pi. After that, the Pi sends the image to the Vision Bonnet. Next, the AI model runs and processes the picture. And finally, the picture is correctly classified as a salad. In this example, the AI model being used is the dish classifier, but the Vision Kit comes with a lot of other AI models on its system image. (If you'd like to see what this step looks like in code, there's a short sketch at the end of this post.)

This technological breakthrough, a camera that doesn't need the cloud to use AI, is also very useful and fun! People of any age will surely have fun assembling this smart camera and experimenting with it.

Yep, one of the awesome things about this kit is that you can experiment with it! The camera's AI can be retrained and reprogrammed using Python, the Vision Bonnet, and the Raspberry Pi. And since it can be retrained and reprogrammed, it can help out in so many ways! It can be reprogrammed to detect if there's someone outside the door, if there are vegetables and other ingredients in the refrigerator, if someone left rubbish on the couch, and more! And it can also be useful in disaster risk reduction.

As mentioned in our AIRA research, my teammates and I learned from one of our interviews with Disaster Risk Reduction experts that drones would be helpful in rescue operations, since they can fly over debris and take pictures of disaster-prone areas. During a disaster, the AIY Vision camera can be attached to a drone and used to spot people who need help.

Where I am sitting right now, I don't know what's going on outside, or even in the next village. And during a disaster, you could say the same for the rescuers on the hypothetical Cherrywood Island. Let's say Cherrywood Island is a disaster-prone place. It has lots of villages and subdivisions, lots of buildings and houses, hotels and malls. But one day, an earthquake strikes, throwing the whole island into a panic. It's a really strong hypothetical earthquake. People are screaming their heads off. Houses are burning. Buildings are about to fall. Across the street from the rescue office is a crowd of evacuees. They aren't injured, but they're sure upset, and extremely panicky. A bit farther, beyond the crowds, stands the Town Hall, a structure that has cracks but isn't falling yet. Even more evacuees crowd the area around it. Meanwhile, some distance away, the Chocolate Lovers Condominium is on fire. People are trapped inside, and the fire is eating up the building. Obviously, it needs to be attended to before it's too late.
A team of local rescuers has to decide who to help first… but they have to think fast. The people who escaped their buildings aren't badly injured, and the Town Hall won't fall just yet, but the Chocolate Lovers Condominium is burning, and there are people trapped inside. Then again, the rescuers don't know that the building is burning any more than we know what's happening in some random restaurant in Alabang. If they go through the island systematically, attending to whoever and whatever they come across first, they'll likely wind up evacuating the Town Hall when they should be assisting the people in the Chocolate Lovers Condominium.

And this is why using the Vision camera to detect disasters and people who need help would be so useful! If the local rescue team of Cherrywood Island sends out drones to scan the area and send back pictures, they'll be able to strategize. Drones can cover the area faster because they're small and airborne. When the rescue team looks through the photos sent by the Vision camera, they'll know that the condominium is burning and realize that they should attend to it first. They'll also be able to receive updates about the Town Hall and the evacuees, because the camera can keep taking pictures of them.

I've been experimenting with the Vision Kit to see if its image classification model can detect natural disasters, such as floods and fires. Here's what I have found so far: pictures of burning buildings are usually classified as "stove" or "firescreen," while floods are often recognized as "bathtub." So the stock image classification doesn't work perfectly, but new AI models can be made to detect disasters and people who need help. (The second sketch at the end of this post shows one simple way to start working with these labels.) I'm still experimenting with this, but I hope that you can also try out the Vision Kit and other technologies to help reduce disaster risk and save lives. Let's work together to change the world and make it safer for everyone!
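P.S. For anyone who wants to tinker: below is a minimal sketch of the classification step described earlier, following the pattern of the image classification example that ships on the AIY system image. Treat it as a starting point, not a polished program; the filename salad.jpg is just a stand-in for a photo taken with the kit's camera, and the dish classifier mentioned above should work the same way through its own aiy.vision.models module.

```python
#!/usr/bin/env python3
# Minimal sketch: classify a saved photo on the Vision Bonnet (no cloud needed).
from PIL import Image

from aiy.vision.inference import ImageInference
from aiy.vision.models import image_classification

def classify(path, top_k=3):
    """Run the on-device image classifier and return (label, score) pairs."""
    image = Image.open(path)
    with ImageInference(image_classification.model()) as inference:
        return image_classification.get_classes(inference.run(image), top_k=top_k)

if __name__ == '__main__':
    # 'salad.jpg' is a placeholder for an image captured by the kit's camera.
    for label, score in classify('salad.jpg'):
        print('{} (prob={:.2f})'.format(label, score))
```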
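And here's a rough sketch of where my disaster-detection experiment is heading: until a proper disaster model is trained, the stock labels can serve as weak hints. The FIRE_HINTS and FLOOD_HINTS sets below are my own guesses based on the misclassifications I mentioned ("stove," "firescreen," "bathtub"), not an official mapping, so expect plenty of false alarms.

```python
#!/usr/bin/env python3
# Hypothetical sketch: watch the camera feed and flag frames whose stock labels
# hint at a disaster. A real rescue tool would need a properly trained model.
from picamera import PiCamera

from aiy.vision.inference import CameraInference
from aiy.vision.models import image_classification

# Stock labels I saw when pointing the camera at disaster photos (my own guesses;
# extend these sets as you find more).
FIRE_HINTS = {'stove', 'firescreen'}
FLOOD_HINTS = {'bathtub'}

def hint_for(labels):
    """Map a set of predicted labels to a rough disaster hint, or None."""
    if labels & FIRE_HINTS:
        return 'possible fire'
    if labels & FLOOD_HINTS:
        return 'possible flood'
    return None

def main():
    # The camera must stay open while CameraInference reads frames from it.
    with PiCamera(sensor_mode=4, framerate=30):
        with CameraInference(image_classification.model()) as inference:
            for result in inference.run():  # loops until interrupted
                classes = image_classification.get_classes(result, top_k=3)
                labels = {label for label, _ in classes}
                hint = hint_for(labels)
                if hint:
                    print('{}! top guesses: {}'.format(hint, ', '.join(sorted(labels))))

if __name__ == '__main__':
    main()
```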