In this interview, Tom Allen, Founder of The AI Journal, and Steve Harris, CEO of Mindtech, look at the future of smart cities, issues with the data currently used to train AI, and how you can be better equipped to succeed with AI in smart city applications.
Tom: Please can you give the audience a bit of background on what smart cities are and how they’re going to benefit communities, people, and businesses?
Steve: We’ve been talking about so-called Smart Cities for decades now, arguably since the creation of Amsterdam’s virtual digital city in 1994.
Smart City technology was supposed to make the places we live in safer, greener, and more sustainable—but, 27 years later, reading the headlines on street crime, traffic accidents or urban pollution, it’s clear we’re not there yet.
The technology already exists to make people feel safer in their cities, and there are companies out there doing everything they can to get that innovation deployed. Mindtech is one of them: we recently announced a new Smart City application pack to help companies working in this space create the synthetic data they need to train their AIs, because AI, trained right, could reshape our towns and cities.
Tom: Why a smart city application pack?
Steve: It’s a question of giving AI the tools it needs to succeed.
Intelligence requires good information. If we want to see modern Smart Cities, we need to feed the vision systems that enable their smooth running with precisely annotated, high-quality, privacy-compliant data in very large quantities, and that data is not always readily available.
That’s where our Smart City application pack comes in. We want to ease the data bottleneck by providing the people developing next-generation Smart City applications with self-serve synthetic data they can use to help their computers see, understand, and predict human-to-human and human-to-world interactions.
Tom: That’s awesome. So, my question is, how exactly does it work?
Steve: We’ve created a pack of scenes, assets, and sophisticated automated behaviours relevant to a range of markets, focused on pedestrian and cyclist safety, traffic management, public transport, and crowd management.
Using our Chameleon platform, developers can deploy these “oven-ready” scenarios to train their AI—cutting out the long and laborious task of sourcing, cleaning, and annotating data taken from the real world.
Take crowd safety, for example. With the pack, a scenario can be set up where a child slips from their parent’s hand in a busy street and is identified by nearby CCTV cameras, which track the child’s movements until a patrol car in the area stops to recover the child. The data generated by this simulated event, covering how a child might move in a crowd, where CCTV needs to be placed to find them, the optimum position of patrol cars, and thousands of different edge-case variations, can be used to inform the infrastructure of a Smart City and help deliver similarly safe results in real life.
Smart City applications for our platform are as diverse as the needs of the people who live in those cities. A cyclist is in a dangerous position next to a truck on the road: the truck’s onboard camera identifies the danger and sounds the alarm. A lone pedestrian is walking down the road at night, potentially being followed by someone: cameras enabled with an AI vision system can analyse the scene and respond by brightening the streetlights and alerting a nearby patrol car. A passenger on a station platform is behaving erratically: an intelligent vision system automatically spots their unusual behaviour and alerts trained staff to come in and help.
Chameleon is endlessly customisable, so AI developers can pick from a range of environments, weather conditions, pedestrian crossings, and traffic scenarios to mimic the chaotic and unpredictable nature of everyday city life.
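To make that kind of parameterisation concrete, here is a minimal sketch of how a synthetic-data pipeline can sweep over scene parameters to produce many variants of one scenario. The function and parameter names are hypothetical illustrations, not Chameleon’s actual interface.

```python
import itertools
import random

# Hypothetical scene parameters: each combination yields a distinct
# rendering of the same underlying scenario (e.g. "child lost in crowd").
ENVIRONMENTS = ["city_centre", "station_platform", "crossing"]
WEATHER = ["clear", "rain", "fog", "snow"]
TIMES_OF_DAY = ["morning", "midday", "dusk", "night"]
CROWD_DENSITIES = [0.2, 0.5, 0.9]  # fraction of walkable area occupied

def render_scene(environment, weather, time_of_day, crowd_density, seed):
    """Placeholder for a renderer call returning an (image, annotations) pair.

    In a real synthetic-data platform this would invoke the 3D engine;
    here it only records the configuration it was asked for.
    """
    return {"environment": environment, "weather": weather,
            "time_of_day": time_of_day, "crowd_density": crowd_density,
            "seed": seed}

# Sweep the full grid, re-rendering each combination with several random
# seeds so pedestrian trajectories differ between otherwise identical runs.
dataset = [
    render_scene(env, wx, tod, density, seed=random.randrange(2**32))
    for env, wx, tod, density in itertools.product(
        ENVIRONMENTS, WEATHER, TIMES_OF_DAY, CROWD_DENSITIES)
    for _ in range(10)
]
print(f"{len(dataset)} scene variants generated")  # 3*4*4*3*10 = 1440
```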
Tom: What’s the issue with the data currently used in AI training?
Steve: If AI is being deployed in a Smart City, it’s typically been trained on real-world data. That’s fine for run-of-the-mill, easily predictable situations. But Smart Cities should also do a better job of protecting citizens from ‘edge-case scenarios’: those unusual, unpredictable, and often tragic events.
The anomalous nature of these events means data on them is scarce, and staging them to capture real-world data would involve substantial risk, if it were even feasible.
With computer-generated scenarios and synthetic data, you can create any edge case imaginable and run it in an almost unlimited number of variations. Synthetic data will therefore lead to fundamentally smarter AIs, capable of improving the efficient running of our cities and transport networks, but also able to recognise, prevent, or mitigate these kinds of nightmare scenarios.
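One common way to put such data to work, sketched here under general assumptions rather than anything Mindtech has published, is to oversample synthetic edge cases when composing a training set, so the model sees rare events often enough to learn from them.

```python
import random

def compose_training_set(real_samples, synthetic_edge_cases, edge_fraction=0.3):
    """Mix real-world data with synthetic edge-case data.

    Real events like "child separated from parent" may appear once in
    millions of frames; synthetically generated examples let us raise
    their share of the training set to a level the model can learn from.
    """
    n_edge = int(len(real_samples) * edge_fraction / (1 - edge_fraction))
    edge = random.choices(synthetic_edge_cases, k=n_edge)  # sample with replacement
    mixed = real_samples + edge
    random.shuffle(mixed)
    return mixed

# Example: 7,000 ordinary real frames, topped up with synthetic edge cases
# so that roughly 30% of training samples show a rare event.
real = [{"source": "real", "id": i} for i in range(7000)]
synthetic = [{"source": "synthetic", "id": i} for i in range(500)]
train = compose_training_set(real, synthetic)
print(len(train))  # 7000 real + 3000 synthetic = 10000
```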
Tom: How does Mindtech’s technology work?
Steve: Another challenge with real-world data is the time and effort it takes engineers to prepare it. It takes an eye-watering 20 weeks to gather and annotate the baseline 100,000 real-world images needed to train a visual AI system to recognise even the most basic scene.
The Chameleon platform is ‘no-code’ and ‘self-serve’. Users can log on and create computer-generated worlds from which rich, accurate, fully annotated, and privacy-compliant data can be generated in hours rather than weeks.
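The reason simulated worlds can yield fully annotated data so quickly is that the renderer already knows the exact position of every object, so labels are derived from ground truth rather than drawn by hand. As a minimal sketch, assuming a standard pinhole camera model (the maths is generic, not Mindtech-specific), a 2D bounding-box annotation falls straight out of the known 3D geometry:

```python
import numpy as np

def project_to_bbox(corners_3d, K):
    """Project an object's 3D bounding-box corners into a 2D image box.

    corners_3d: (8, 3) array of corner positions in camera coordinates
                (x right, y down, z forward), with z > 0.
    K:          (3, 3) pinhole camera intrinsic matrix.
    Returns (x_min, y_min, x_max, y_max) in pixels.
    """
    pts = (K @ corners_3d.T).T          # (8, 3) homogeneous image points
    pts = pts[:, :2] / pts[:, 2:3]      # perspective divide
    x_min, y_min = pts.min(axis=0)
    x_max, y_max = pts.max(axis=0)
    return x_min, y_min, x_max, y_max

# Example: a 0.5 m-wide, 1.8 m-tall pedestrian 10 m in front of the camera.
K = np.array([[1000.0,    0.0, 960.0],   # fx, skew, cx
              [   0.0, 1000.0, 540.0],   # fy, cy
              [   0.0,    0.0,   1.0]])
corners = np.array([[x, y, z]
                    for x in (-0.25, 0.25)
                    for y in (-0.9, 0.9)
                    for z in (9.9, 10.1)])
print(project_to_bbox(corners, K))  # exact label, no manual annotation
```

Because every simulated object’s pose is known exactly, these labels are pixel-perfect and effectively free, which is what collapses the annotation timeline from weeks to hours.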
We’re already seeing customers using our platform for multiple applications in visual AI: in healthcare, to train machines to monitor patients recovering from surgery, and in security and safety systems, to detect suspicious objects or unusual patterns of behaviour inside shopping centres or sports arenas. This new application pack will speed up the development process and lead to smarter, safer cities around the world.