Roundup If you wanna know what’s been happening in AI this week beyond what we’ve already covered, here’s a quick roundup…
Waymo Open Dataset
Self-driving-car dreamer Waymo is planning to release a data set collected from sensors on its autonomous vehicles to help, er, drive progress in driverless automotive technology, it announced at a computer-vision conference this week.
The Open Dataset will apparently contain about 3,000 driving scenes totalling 16.7 hours of video, 600,000 image frames, some 25 million 3D bounding boxes, and 22 million 2D ones, according to Synced.
It’s the first time, as far as we’re aware, that Waymo has offered to open up any of its self-driving training data, the kind of material that’s usually fiercely guarded by machine-learning outfits. High-quality data is key to bumping up the performance of neural networks, and Waymo thus wants developers to use its data set to “[accelerate] the development of machine perception and self-driving technology.”
The data crate isn’t publicly available just yet, but you can sign up to receive an email alert when it’s out. The first release will contain 1,000 of the proposed 3,000 driving scenes, and is expected to be publicly emitted sometime in July.
AI + video cameras = scary surveillance?
A senior policy analyst working at the American Civil Liberties Union has published a report detailing the concerning rise of AI-powered surveillance.
Machine-learning software has certainly enhanced the capabilities of video camera systems. Previously, the equipment just recorded footage for humans to play back later, with perhaps some simple heuristics and algorithms to search for suspicious activities caught on tape. Now, all that material can be analysed and labelled by powerful (although not infallible) image-recognition neural networks. These models can detect and flag up specific objects and people, or particular actions and movements, and so on. There are some advantages to the technology, according to the report’s author, Jay Stanley, a senior policy analyst at the ACLU’s Speech, Privacy, and Technology Project.
“As with any tool, there will be beneficial uses of this technology – ‘video assistant lifeguards’ at swimming pools, for example, or deployments that protect us all through better environmental monitoring,” he said.
But it comes at a cost. “One of the most worrisome [concerns] is the possibility of widespread chilling effects as we all become highly aware that our actions are being not just recorded and stored, but scrutinized and evaluated on a second-by-second basis with consequences that can include being flagged as suspicious, questioned by the police, or worse.”
The problem is that the cops and feds using these smart cameras aren’t very transparent about how the technology is being used. How are the algorithms trained? How accurate is a particular piece of software? How is the footage stored? Does any of this potentially violate the US Constitution?
Pizza by robot
Domino’s Pizza has partnered up with Nuro, a Silicon Valley robotics startup, to bring hungry netizens its meals on wheels. Boxes of pizza will be slotted into Nuro’s new R2 trundle bots, the design of which hasn’t been revealed yet, and the fleet of R2s will deliver the food autonomously to customers’ doors, where they can retrieve their hot cheesy slices by unlocking the robot with a PIN.
The service will only be available to Domino’s Pizza customers in Houston, Texas, later this year, where Nuro has been testing its robots since March. “We are always looking for new ways to innovate and evolve the delivery experience for our customers,” Kevin Vasconi, Domino’s executive vice president and chief information officer, said this week.
“Nuro’s vehicles are specially designed to optimize the food delivery experience, which makes them a valuable partner in our autonomous vehicle journey. The opportunity to bring our customers the choice of an unmanned delivery experience, and our operators an additional delivery solution during a busy store rush, is an important part of our autonomous vehicle testing.”
The California-based startup also focuses on building fleets of bots to deliver other items, such as groceries and dry cleaning.
Massachusetts wants to ban facial-recog tech
Massachusetts could be the first US state to completely ban facial-recognition technology from official government use.
Lawmakers are mulling a proposed bill that calls for a statewide moratorium on government use of the tech. A poll conducted by the ACLU revealed that 91 per cent of voters in the state believe facial recognition needs to be regulated, 76 per cent believe the government shouldn’t be allowed to monitor civilians using smart video cameras, and 79 per cent said they would support the bill.
Although it looks promising for privacy warriors, the results should be taken with a pinch of salt, since the poll surveyed only 503 registered voters in Massachusetts. “Face surveillance technology gives the government unprecedented power to track who we are, where we go, what we do, and who we know,” said Carol Rose, executive director of the ACLU of Massachusetts. “This technology threatens to create a world where people are watched and identified as they attend a protest, congregate at a place of worship, visit a medical provider, and go about their daily lives. It’s time to press pause in Massachusetts.” ®