Gadgets – TechCrunch John Biggs

As a hater of all sports I am particularly excited about the imminent replacement of humans with robots in soccer. If this exciting match, the Standard Platform League (SPL) final of the German Open featuring the Nao-Team HTWK vs. Nao Devils, is any indication, the future is going to be great.

The robots are all NAO robots by SoftBank, designed according to the requirements of the Standard Platform League. They can run (sort of), kick (sort of), and pick themselves up if they fall. The 21-minute video is a bit of a slog and the spectators are definitely not drunk hooligans, but darn if it isn’t great to see little robots hitting the turf to grab a ball before it hits the goal.

I, for one, welcome our soccer-playing robot overlords.

Gadgets – TechCrunch Devin Coldewey

Teradyne, a prosaic-sounding but flush company that provides automated testing equipment for industrial applications, has acquired the Danish robotics company MiR for an eye-popping $148 million up front, with another $124 million on the table if performance goals are met.

MiR, which despite the lowercase “i” stands for Mobile Industrial Robots, does what you might guess. Founded in 2013, the company has grown steadily and had a huge 2017, tripling its revenues to $12 million after its latest robot, the MiR200, received high marks from customers.

MiR’s robots are of the warehouse sort, wheeled little autonomous fellows that can lift and pull pallets, boxes, and so on. They look a bit like the little ones that are always underfoot in Star Wars movies. It’s a natural fit for Teradyne, especially with the latter’s recent purchase of the well-known Universal Robots in a $350 million deal in 2015.

Testing loads of electronics and components may be a dry business, but it’s a booming one, because the companies that test faster ship faster. Any time efficiencies can be made in the process, be it warehouse logistics or assisting expert humans in sensitive procedures, one can be sure a company will be willing to pay for them.

Teradyne also noted (The Robot Report points out) that both companies take a modern approach to robots and how they interact with and must be trained by people — the old paradigm of robotics specialists having to carefully program these things doesn’t scale well, and both UR and MiR were forward-thinking enough to improve that pain point.

The plan is, of course, to take MiR’s successful technology global, hopefully recreating its success on a larger scale.

“My main focus is to get our mobile robots out to the entire world,” said MiR CSO and founder Niels Jul Jacobsen in the press release announcing the acquisition. “With Teradyne as the owner, we will have strong backing to ensure MiR’s continued growth in the global market.”

Gadgets – TechCrunch Devin Coldewey

Waymo has become the second company to apply for the newly-available permit to deploy autonomous vehicles without safety drivers on some California roads, the San Francisco Chronicle reports. It would be putting its cars — well, minivans — on streets around Mountain View, where it already has an abundance of data.

The company already has driverless cars in play over in Phoenix, as it showed in a few promotional videos last month. So this isn’t the first public demonstration of its confidence.

California only just made it possible to grant permits allowing autonomous vehicles without safety drivers on April 2; one other company has applied for it in addition to Waymo, but it’s unclear which. The new permit type also allows for vehicles lacking any kind of traditional manual controls, but for now the company is sticking with its modified Chrysler Pacificas. Hey, they’re practical.

The recent fatal collision of an Uber self-driving car with a pedestrian, plus another fatality in a Tesla operating in semi-autonomous mode, make this something of an awkward time to introduce vehicles to the road minus safety drivers. Of course, it must be said that both of those cars had people behind the wheel at the time of their crashes.

Assuming the permit is granted, Waymo’s vehicles will be limited to the Mountain View area, which makes sense — the company has been operating there essentially since its genesis as a research project within Google. So there should be no shortage of detail in the data, and the local authorities will already be familiar with the people they’d need to contact for handling any issues like accidents, permit problems, and so on.

No details yet on what exactly the cars will be doing, or whether you’ll be able to ride in one. Be patient.

Gadgets – TechCrunch Devin Coldewey

We’ve trained machine learning systems to identify objects, navigate streets and recognize facial expressions, but as difficult as they may be, they don’t even touch the level of sophistication required to simulate, for example, a dog. Well, this project aims to do just that — in a very limited way, of course. By observing the behavior of A Very Good Girl, this AI learned the rudiments of how to act like a dog.

It’s a collaboration between the University of Washington and the Allen Institute for AI, and the resulting paper will be presented at CVPR in June.

Why do this? Well, although much work has been done to simulate the sub-tasks of perception like identifying an object and picking it up, little has been done in terms of “understanding visual data to the extent that an agent can take actions and perform tasks in the visual world.” In other words, act not as the eye, but as the thing controlling the eye.

And why dogs? Because they’re intelligent agents of sufficient complexity, “yet their goals and motivations are often unknown a priori.” In other words, dogs are clearly smart, but we have no idea what they’re thinking.

As an initial foray into this line of research, the team wanted to see if by monitoring the dog closely and mapping its movements and actions to the environment it sees, they could create a system that accurately predicted those movements.

In order to do so, they loaded up a Malamute named Kelp M. Redmon with a basic suite of sensors. There’s a GoPro camera on Kelp’s head, six inertial measurement units (on the legs, tail and trunk) to tell where everything is, a microphone and an Arduino that tied the data together.

They recorded many hours of activities — walking in various environments, fetching things, playing at a dog park, eating — syncing the dog’s movements to what it saw. The result is the Dataset of Ego-Centric Actions in a Dog Environment, or DECADE, which they used to train a new AI agent.

This agent, given certain sensory input — say a view of a room or street, or a ball flying past it — was to predict what a dog would do in that situation. Not to any serious level of detail, of course — but even just figuring out how to move its body and to where is a pretty major task.
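To make that prediction task a bit more concrete, here is a minimal sketch of the kind of model it implies: an image encoder feeding a recurrent network that predicts the dog’s next joint movements from a short clip. This is not the researchers’ actual architecture — the number of joints, the discretization of movements into classes, and every layer size below are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class DogActionPredictor(nn.Module):
    """Sketch: predict a dog's next joint movements from egocentric video frames.

    Assumptions for illustration only: 6 joints tracked by the IMUs, each
    discretized into 8 movement classes, and a tiny CNN standing in for the
    pretrained image features a real system would use.
    """

    def __init__(self, num_joints=6, classes_per_joint=8, hidden=256):
        super().__init__()
        self.encoder = nn.Sequential(                       # toy frame encoder
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.rnn = nn.LSTM(64, hidden, batch_first=True)    # temporal model over frames
        self.head = nn.Linear(hidden, num_joints * classes_per_joint)
        self.num_joints, self.classes = num_joints, classes_per_joint

    def forward(self, frames):                              # frames: (B, T, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1)).view(b, t, -1)
        out, _ = self.rnn(feats)
        logits = self.head(out)                             # per-frame movement logits
        return logits.view(b, t, self.num_joints, self.classes)

# Toy usage: a batch of 2 clips, 5 frames each, 128x128 RGB.
model = DogActionPredictor()
clips = torch.randn(2, 5, 3, 128, 128)
print(model(clips).shape)  # torch.Size([2, 5, 6, 8])
```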

“It learns how to move the joints to walk, learns how to avoid obstacles when walking or running,” explained Hessam Bagherinezhad, one of the researchers, in an email. “It learns to run for the squirrels, follow the owner, track the flying dog toys (when playing fetch). These are some of the basic AI tasks in both computer vision and robotics that we’ve been trying to solve by collecting separate data for each task (e.g. motion planning, walkable surface, object detection, object tracking, person recognition).”

That can produce some rather complex data: for example, the dog model must know, just as the dog itself does, where it can walk when it needs to get from here to there. It can’t walk on trees, or cars, or (depending on the house) couches. So the model learns that as well, and this can be deployed separately as a computer vision model for finding out where a pet (or small legged robot) can get to in a given image.
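As a rough illustration of that separate deployment, walkable-surface estimation can be framed as per-pixel binary segmentation on top of convolutional image features. The decoder below, and the idea of reusing pre-pooling features from an encoder like the sketch above, are assumptions for illustration rather than the paper’s actual design.

```python
import torch
import torch.nn as nn

class WalkabilityHead(nn.Module):
    """Sketch: label which pixels of an image a dog could plausibly walk on.

    Framed here as binary per-pixel segmentation on top of convolutional
    features (e.g. the pre-pooling features of an encoder like the one above);
    the decoder is an illustrative stand-in, not the paper's design.
    """

    def __init__(self, encoder_channels=64):
        super().__init__()
        self.decoder = nn.Sequential(
            nn.Conv2d(encoder_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1),                 # one "walkable" logit per pixel
        )

    def forward(self, feature_map, out_size):
        logits = self.decoder(feature_map)
        # upsample coarse logits back to the input image resolution
        return nn.functional.interpolate(
            logits, size=out_size, mode="bilinear", align_corners=False
        )

# Toy usage: a 64-channel, 29x29 feature map for one 128x128 image.
head = WalkabilityHead()
features = torch.randn(1, 64, 29, 29)
mask_logits = head(features, out_size=(128, 128))
print(mask_logits.shape)  # torch.Size([1, 1, 128, 128])
```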

This was just an initial experiment, the researchers say, with successful but limited results. Others may consider bringing in more senses (smell is an obvious one) or seeing how a model produced from one dog (or many) generalizes to other dogs. They conclude: “We hope this work paves the way towards better understanding of visual intelligence and of the other intelligent beings that inhabit our world.”

Gadgets – TechCrunch Devin Coldewey

Logistics may not be the most exciting application of autonomous vehicles, but it’s definitely one of the most important. And the marine shipping industry — as you can imagine, one of the oldest industries in the world — is ready for it. Or at least two major Norwegian shipping companies are: they’re building an autonomous shipping venture called Massterly from the ground up.

“Massterly” isn’t just a pun on mass; “Maritime Autonomous Surface Ship” is the term Wilhelmsen and Kongsberg coined to describe the self-captaining boats that will ply the seas of tomorrow.

These companies, with “a combined 360 years of experience” as their video put it, are trying to get the jump on the next phase of shipping, starting with creating the world’s first fully electric and autonomous container ship, the Yara Birkeland. It’s a modest vessel by shipping standards — 250 feet long and capable of carrying 120 containers, according to the concept — but it will be capable of loading, navigating, and unloading without a crew.

(One assumes there will be some people on board or nearby to intervene if anything goes wrong, of course. Why else would there be railings up front?)

Each ship will carry major radar and lidar units, visible-light and IR cameras, satellite connectivity, and so on.

Control centers will be on land, where the ships will be administered much like air traffic, and ships can be taken over for manual intervention if necessary.

At first there will be limited trials, naturally: the Yara Birkeland will stay within 12 nautical miles of the Norwegian coast, shuttling between Larvik, Brevik, and Herøya. It’ll only be going 6 knots — so don’t expect it to make any overnight deliveries.

“As a world-leading maritime nation, Norway has taken a position at the forefront in developing autonomous ships,” said Wilhelmsen group CEO Thomas Wilhelmsen in a press release. “We take the next step on this journey by establishing infrastructure and services to design and operate vessels, as well as advanced logistics solutions associated with maritime autonomous operations. Massterly will reduce costs at all levels and be applicable to all companies that have a transport need.”

The Yara Birkeland is expected to be seaworthy by 2020, though Massterly should be operating as a company by the end of the year.

Gadgets – TechCrunch Devin Coldewey

The Defense Department’s research wing is serious about putting drones into action, not just one by one but in coordinated swarms. The Offensive Swarm-Enabled Tactics program is kicking off its second “sprint,” a period of solicitation and rapid prototyping of systems based around a central theme. This spring sprint is all about “autonomy.”

The idea is to collect lots of ideas on how new technology, be it sensors, software, or better propeller blades, can enhance the ability of drones to coordinate and operate as a collective.

Specifically, swarms of 50 drones will need to “isolate an urban objective” within half an hour or so by working together with each other and with ground-based robots. That, at least, is the “operational backdrop” that should guide prospective entrants in deciding whether their tech is applicable.

So a swarm of drones that seeds a field faster than a tractor, while practical for farmers, isn’t really something the Pentagon is interested in here. On the other hand, if you can sell that idea as a swarm of drones dropping autonomous sensors on an urban battlefield, they might take a shine to it.

But you could also simply demonstrate how using a compact ground-based lidar system could improve swarm coordination at low cost and without using visible light. Or maybe you’ve designed a midair charging system that lets a swarm perk up flagging units without human intervention.

Those are pretty good ideas, actually — maybe I’ll run them by the program manager, Timothy Chung, when he’s on stage at our Robotics event in Berkeley this May. Chung also oversees the Subterranean Challenge and plenty more at DARPA. He looks like he’s having a good time in the video explaining the ground rules of this new sprint:

You don’t have to actually have 50 drones to take part — there are simulators and other ways of demonstrating value. More information on the program and how to submit your work for consideration can be found at the FBO page.

Gadgets – TechCrunch Matt Burns

Nvidia and Arm today announced a partnership aimed at making it easier for chip makers to incorporate deep learning capabilities into next-generation consumer gadgets, mobile devices and Internet of Things objects. In practical terms, thanks to this partnership, artificial intelligence could soon be coming to doorbell cams and smart speakers.

Arm intends to integrate Nvidia’s open-source Deep Learning Accelerator (NVDLA) architecture into its just-announced Project Trillium platform. Nvidia says this should help IoT chip makers incorporate AI into their products.

“Accelerating AI at the edge is critical in enabling Arm’s vision of connecting a trillion IoT devices,” said Rene Haas, EVP and president of the IP Group at Arm. “Today we are one step closer to that vision by incorporating NVDLA into the Arm Project Trillium platform, as our entire ecosystem will immediately benefit from the expertise and capabilities our two companies bring in AI and IoT.”

Announced last month, Arm’s Project Trillium is a series of scalable processors designed for machine learning and neural networks. NVDLA’s open-source nature allows Arm to offer a suite of developer tools on its new platform. Between Arm’s scalable chip platforms and Nvidia’s developer tools, the two companies believe they’re offering a solution that could give billions of IoT, mobile and consumer electronics devices access to deep learning.

Deepu Talla, VP and GM of Autonomous Machines at Nvidia, explained it best with this analogy: “NVDLA is like providing all the ingredients for somebody to make it a dish including the instructions. With Arm [this partnership] is basically like a microwave dish.”

Gadgets – TechCrunch Devin Coldewey

Nvidia is temporarily stopping testing of its autonomous vehicle platform in response to last week’s fatal collision of a self-driving Uber car with a pedestrian. TechCrunch confirmed this with the company, which offered the following statement:

Ultimately [autonomous vehicles] will be far safer than human drivers, so this important work needs to continue. We are temporarily suspending the testing of our self-driving cars on public roads to learn from the Uber incident. Our global fleet of manually driven data collection vehicles continue to operate.

Reuters first reported the news.

The manually driven vehicles, to be clear, are not self-driving ones with safety drivers, but traditionally controlled vehicles with a full autonomous sensor suite on them to collect data.

Toyota also suspended its autonomous vehicle testing out of concern for its own drivers’ well-being. Uber of course ceased its testing operations at once.

Gadgets – TechCrunch Devin Coldewey

It seems obvious that the way a robot moves would affect how people interact with it, and whether they consider it easy or safe to be near. But what poses and movement types specifically are reassuring or alarming? Disney Research looked into a few of the possibilities of how a robot might approach a simple interaction with a nearby human.

The study had people picking up a baton with a magnet at one end and passing it to a robotic arm, which would automatically move to collect the baton with its own magnet.

But the researchers threw variations into the mix to see how they affected the forces involved, how people moved and what they felt about the interaction. The robot’s behavior had two variants for each of three phases: moving into position, grasping the object and removing it from the person’s hand.

For movement, it either started hanging down inertly and sprung up to move into position, or it began already partly raised. The latter condition was found to make people accommodate the robot more, putting the baton into a more natural position for it to grab. Makes sense — when you pass something to a friend, it helps if they already have their hand out.

Grasping was done either quickly or more deliberately. In the first condition the robot’s arm attaches the magnet as soon as it’s in position; in the second, it pushes up against the baton and repositions it so it can be pulled away more naturally. There wasn’t a big emotional difference here, but the opposing forces were much lower with the second grasp type, perhaps meaning it was easier.

Once attached, the robot retracted the baton either slowly or more quickly. Humans preferred the former, saying that the latter felt as if the object was being yanked out of their hands.

The results won’t blow anyone’s mind, but they’re an important contribution to the fast-growing field of human-robot interaction. Once there are best practices for this kind of thing, robots that, say, clear your table at a restaurant or hand workers items in a factory can be designed with the knowledge that they won’t produce any extra anxiety in nearby humans.

A side effect of all this was that the people in the experiment gradually seemed to learn to predict the robot’s movements and accommodate them — as you might expect. But it’s a good sign that even over a handful of interactions a person can start building a rapport with a machine they’ve never worked with before.