Gadgets – TechCrunch Devin Coldewey

While hose-toting drones may be a fantasy, hose-powered robo-dragons (or robotic hose-dragons — however you like it) are very much a reality. This strange but potentially useful robot from Japanese researchers could snake into the windows of burning buildings, blasting everything around it with the powerful jets of water it uses to maneuver itself.

Yes, it’s a real thing: created by Tohoku University and Hachinohe College, the DragonFireFighter was presented last month at the International Conference on Robotics and Automation.

It works on the same principle your hose does when you turn it on and it starts flapping around everywhere. Essentially your hose is acting as a simple jet: the force of the water being blasted out pushes the hose itself in the opposite direction. So what if the hose had several nozzles, pointing in several directions, that could be opened and closed independently?

Well, you’d have a robotic hose-dragon. And we do.
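For the curious, the physics is simple momentum flux: a jet’s reaction force is its mass flow rate times its exit velocity. Here’s a back-of-the-envelope sketch in Python, using made-up nozzle figures for illustration rather than the team’s actual specs.

```python
# Back-of-the-envelope jet thrust: F = m_dot * v = rho * A * v^2.
# The nozzle diameter and exit velocity below are illustrative guesses,
# not the researchers' specifications.
import math

RHO_WATER = 1000.0  # kg/m^3

def jet_thrust(nozzle_diameter_m: float, exit_velocity_ms: float) -> float:
    """Reaction force (in newtons) produced by a single water jet."""
    area = math.pi * (nozzle_diameter_m / 2) ** 2      # nozzle cross-section
    mass_flow = RHO_WATER * area * exit_velocity_ms    # kg of water per second
    return mass_flow * exit_velocity_ms                # momentum flux = thrust

# A hypothetical 10 mm nozzle at 20 m/s pushes back with roughly 31 N;
# several such jets, opened and closed independently, are plenty to steer.
print(f"{jet_thrust(0.010, 20.0):.0f} N")
```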

The DragonFireFighter has a nozzle-covered sort of “head” and what can only be called a “neck.” The water pressure from the hose is diverted into numerous outlets on both, letting the robot hold a stable position that can be adjusted more or less at will.

It requires a bit of human intervention to go forward, but as you can see, several jets are already pushing it in that direction, presumably for stability and rigidity at this point. If the operators had a little more line to give it, it seems to me it could zoom out quite a bit further than it was permitted to in the video.

For now it may be more effective to just direct all that water pressure into the window, but one can certainly imagine situations where something like this would be useful.

DragonFireFighter was also displayed at the International Fire and Disaster Prevention Exhibition in Tokyo.

One last thing. I really have to give credit where credit’s due: I couldn’t possibly outdo IEEE Spectrum’s headline, “Firefighting Robot Snake Flies on Jets of Water.”

Gadgets – TechCrunch Devin Coldewey

A robot’s got to know its limitations. But that doesn’t mean it has to accept them. This one in particular uses tools to expand its capabilities, commandeering nearby items to construct ramps and bridges. It’s satisfying to watch but, of course, also a little worrying.

This research, from Cornell and the University of Pennsylvania, is essentially about making a robot take stock of its surroundings and recognize something it can use to accomplish a task that it knows it can’t do on its own. It’s actually more like a team of robots, since the parts can detach from one another and accomplish things on their own. But you didn’t come here to debate the multiplicity or unity of modular robotic systems! That’s for the folks at the IEEE International Conference on Robotics and Automation, where this paper was presented (and Spectrum got the first look).

SMORES-EP is the robot in play here, and the researchers have given it a specific breadth of knowledge. It knows how to navigate its environment, but also how to inspect it with its little mast-cam, and from that inspection derive meaningful data, like whether an object can be rolled over or a gap can be crossed.

It also knows how to interact with certain objects, and what they do; for instance, it can use its built-in magnets to pull open a drawer, and it knows that a ramp can be used to roll up to an object of a given height or lower.

A high-level planning system directs the robots/robot-parts based on knowledge that isn’t critical for any single part to know. For example, given the instruction to find out what’s in a drawer, the planner understands that to accomplish that, the drawer needs to be open; for it to be open, a magnet-bot will have to attach to it from this or that angle, and so on. And if something else is necessary, for example a ramp, it will direct that to be placed as well.
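To make that concrete, here’s a toy goal-regression planner in Python. The action names, preconditions and ordering are invented for illustration; they are not taken from the actual SMORES-EP software.

```python
# Toy goal-regression planner in the spirit of the description above.
# Each effect maps to the (invented) action that achieves it and that
# action's preconditions, which must be satisfied first.

ACTIONS = {
    "drawer_open":     ("open_drawer",      ["magnet_attached"]),
    "magnet_attached": ("attach_magnet",    ["at_drawer"]),
    "at_drawer":       ("drive_to_drawer",  ["ramp_placed"]),
    "ramp_placed":     ("place_ramp",       []),
    "contents_known":  ("inspect_contents", ["drawer_open"]),
}

def plan(goal: str, state: set, steps: list) -> None:
    """Recursively satisfy preconditions, then append the action."""
    if goal in state:
        return
    action, preconditions = ACTIONS[goal]
    for pre in preconditions:
        plan(pre, state, steps)
    steps.append(action)
    state.add(goal)

steps: list = []
plan("contents_known", set(), steps)
print(steps)
# -> ['place_ramp', 'drive_to_drawer', 'attach_magnet', 'open_drawer',
#     'inspect_contents']
```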

The experiment shown in this video has the robot system demonstrating how this could work in a situation where the robot must accomplish a high-level task using this limited but surprisingly complex body of knowledge.

In the video, the robot is told to check the drawers for certain objects. In the first drawer, the target objects aren’t present, so it must inspect the next one up. But it’s too high — so it needs to get on top of the first drawer, which luckily for the robot is full of books and constitutes a ledge. The planner sees that a ramp block is nearby and orders it to be put in place, and then part of the robot detaches to climb up and open the drawer, while the other part maneuvers into place to check the contents. Target found!

In the next task, it must cross a gap between two desks. Fortunately, someone left the parts of a bridge just lying around. The robot puts the bridge together, places it in position after checking the scene, and sends its forward half rolling towards the goal.

These cases may seem rather staged, but this isn’t about the robot itself and its ability to tell what would make a good bridge. That comes later. The idea is to create systems that logically approach real-world situations based on real-world data and solve them using real-world objects. Being able to construct a bridge from scratch is nice, but unless you know what a bridge is for, when and how it should be applied, where it should be carried and how to get over it, and so on, it’s just a part in search of a whole.

Likewise, many a robot with a perfectly good drawer-pulling hand will have no idea that you need to open a drawer before you can tell what’s in it, or that maybe you should check other drawers if the first doesn’t have what you’re looking for!

Such basic problem-solving is something we take for granted, but nothing can be taken for granted when it comes to robot brains. Even in the experiment described above, the robot failed multiple times for multiple reasons while attempting to accomplish its goals. That’s okay — we all have a little room to improve.

Gadgets – TechCrunch Natasha Lomas

The UK has announced new stop-gap laws for drone operators restricting how high they can fly their craft — 400ft — and prohibiting the devices from being flown within 1km of an airport boundary. The measures will come into effect on July 30.

The government says the new rules are intended to enhance safety, including the safety of passengers of aircraft — given a year-on-year increase in reports of drone incidents involving aircraft. It says there were 93 such incidents reported in the country last year, up from 71 the year before.

And while the UK’s existing Drone Code (issued in 2016) already warns operators to restrict drone flights to 400ft — and to stay “well away” from airports and aircraft — those measures are now being baked into law, via an amendment to the 2016 Air Navigation Order (ahead of a full drone bill which was promised for spring but has yet to materialize).

UK drone users who flout the new height and airport boundary restrictions face being charged with recklessly or negligently acting in a manner likely to endanger an aircraft or any person in an aircraft — which carries a penalty of up to five years in prison or an unlimited fine, or both.

Additional measures are also being legislated for, as announced last summer — with a requirement for owners of drones weighing 250 grams or more to register with the Civil Aviation Authority and for drone pilots to take an online safety test.

Users who fail to register or sit the competency tests could face fines of up to £1,000, though those requirements won’t come into force until November 30, 2019.

Commenting in a statement, aviation minister Baroness Sugg said: “We are seeing fast growth in the numbers of drones being used, both commercially and for fun. Whilst we want this industry to innovate and grow, we need to protect planes, helicopters and their passengers from the increasing numbers of drones in our skies. These new laws will help ensure drones are used safely and responsibly.”

In a supporting statement, Chris Woodroofe, Gatwick Airport’s COO, added: “We welcome the clarity that today’s announcement provides as it leaves no doubt that anyone flying a drone must stay well away from aircraft, airports and airfields. Drones open up some exciting possibilities but must be used responsibly. These clear regulations, combined with new surveillance technology, will help the police apprehend and prosecute anyone endangering the traveling public.”

Drone maker DJI also welcomed what it couched as a measured approach to regulation. “The Department for Transport’s updates to the regulatory framework strike a sensible balance between protecting public safety and bringing the benefits of drone technology to British businesses and the public at large,” said Christian Struwe, head of public policy Europe at DJI.

“The vast majority of drone pilots fly safely and responsibly, and governments, aviation authorities and drone manufacturers agree we need to work together to ensure all drone pilots know basic safety rules. We are therefore particularly pleased about the Department for Transport’s commitment to accessible online testing as a way of helping drone users to comply with the law.”

Last fall the UK government also announced it plans to legislate to give police more powers to ground drones to prevent unsafe or criminal usage — measures it also said it would include in the forthcoming drone bill.

Gadgets – TechCrunch John Biggs

Cornell researchers have made a little robot that can express its emotions through touch, sending out little spikes when it’s scared or even getting goosebumps to express delight or excitement. The prototype, a cute smiling creature with rubber skin, is designed to test touch as an I/O system for robotic projects.

The robot mimics the skin of octopuses, which can turn spiky when threatened.

The researchers, Yuhan Hu, Zhengnan Zhao, Abheek Vimal, and Guy Hoffman, created the robot to experiment with new methods for robot interaction. They compare the skin to “human goosebumps, cats’ neck fur raising, dogs’ back hair, the needles of a porcupine, spiking of a blowfish, or a bird’s ruffled feathers.”

“Research in human-robot interaction shows that a robot’s ability to use nonverbal behavior to communicate affects their potential to be useful to people, and can also have psychological effects. Other reasons include that having a robot use nonverbal behaviors can help make it be perceived as more familiar and less machine-like,” the researchers told IEEE Spectrum.

The skin has multiple configurations and is powered by a computer-controlled elastomer that can inflate and deflate on demand. The goosebumps pop up to match the expression on the robot’s face, allowing humans to better understand what the robot “means” when it raises its little hackles or gets bumpy. I, for one, welcome our bumpy robotic overlords.

Gadgets – TechCrunch Devin Coldewey

Making something fly involves a lot of trade-offs. Bigger stuff can hold more fuel or batteries, but too big and the lift required is too much. Small stuff takes less lift to fly but might not hold a battery with enough energy to do so. Insect-sized drones have had that problem in the past — but now this RoboFly is taking its first flaps into the air… all thanks to the power of lasers.

We’ve seen bug-sized flying bots before, like the RoboBee, but as you can see it has wires attached to it that provide power. Batteries on board would weigh it down too much, so researchers have focused in the past on demonstrating that flight is possible in the first place at that scale.

But what if you could provide power externally without wires? That’s the idea behind the University of Washington’s RoboFly, a sort of spiritual successor to the RoboBee that gets its power from a laser trained on an attached photovoltaic cell.

“It was the most efficient way to quickly transmit a lot of power to RoboFly without adding much weight,” said co-author of the paper describing the bot, Shyam Gollakota. He’s obviously very concerned with power efficiency — last month he and his colleagues published a way of transmitting video with 99 percent less power than usual.

There’s more than enough power in the laser to drive the robot’s wings; it gets adjusted to the correct voltage by an integrated circuit, and a microcontroller sends that power to the wings depending on what they need to do. Here it goes:

“To make the wings flap forward swiftly, it sends a series of pulses in rapid succession and then slows the pulsing down as you get near the top of the wave. And then it does this in reverse to make the wings flap smoothly in the other direction,” explained lead author Johannes James.
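Here’s a rough Python sketch of that timing idea: stepping evenly through wing position along a sinusoidal stroke yields pulses that fire in rapid succession at first, then stretch out near the top of the wave. The pulse count and stroke duration are invented numbers, not the real controller’s values.

```python
# Sketch of the pulse-timing scheme described above; all numbers invented.
import math

PULSES_PER_HALF_STROKE = 10

def pulse_times(half_stroke_ms: float) -> list:
    """Timestamps (ms) for pulses tracing one half of a sinusoidal flap."""
    times = []
    for i in range(1, PULSES_PER_HALF_STROKE + 1):
        # Equal steps in wing *position* mean unequal steps in *time*:
        # pulses crowd together early and thin out near the stroke's top.
        fraction = i / PULSES_PER_HALF_STROKE
        times.append(half_stroke_ms * 2 / math.pi * math.asin(fraction))
    return times

forward = pulse_times(10.0)                        # forward flap
backward = [20.0 - t for t in reversed(forward)]   # mirrored return stroke
print([round(t, 2) for t in forward])
# -> [0.64, 1.28, 1.94, 2.62, 3.33, 4.1, 4.94, 5.9, 7.13, 10.0]
```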

At present the bot just takes off, travels almost no distance and lands — but that’s just to prove the concept of a wirelessly powered robot insect (it isn’t obvious). The next steps are to improve onboard telemetry so it can control itself, and make a steered laser that can follow the little bug’s movements and continuously beam power in its direction.

The team is headed to Australia next week to present the RoboFly at the International Conference on Robotics and Automation in Brisbane.

Gadgets – TechCrunch Devin Coldewey

NASA’s latest mission to Mars, InSight, is set to launch early Saturday morning in pursuit of a number of historic firsts in space travel and planetology. The lander’s instruments will probe the surface of the planet and monitor its seismic activity with unprecedented precision, while a pair of diminutive cubesats riding shotgun will test the viability of tiny spacecraft for interplanetary travel.

Saturday at 4:05 AM Pacific is the first launch opportunity, but if weather forbids it, they’ll just try again soon after — the chances of clouds sticking around all the way until June 8, when the launch window closes, are slim to none.

InSight isn’t just a pretty name they chose; it stands for Interior Exploration using Seismic Investigations, Geodesy and Heat Transport, at least after massaging the acronym a bit. Its array of instruments will teach us about the Martian interior, granting us insight (see what they did there?) into the past and present of Mars and the other rocky planets in the solar system, including Earth.

Bruce Banerdt, principal investigator for the mission at NASA’s Jet Propulsion Laboratory, has been pushing for this mission for more than two decades, after practically a lifetime working at the place.

“This is the only job I’ve ever had in my life other than working in the tire shop during the summertime,” he said in a recent NASA podcast. He’s worked on plenty of other missions, of course, but his dedication to this one has clearly paid off. It was actually originally scheduled to launch in 2016, but some trouble with an instrument meant they had to wait until the next launch window — now.

InSight is a lander in the style of Phoenix, about the size of a small car, and shot towards Mars faster than a speeding bullet. The launch is a first in itself: NASA has never launched an interplanetary mission from the West Coast, but conditions aligned in this case, making California’s Vandenberg Air Force Base the best option. It doesn’t even require a gravity assist to get where it’s going.

“Instead of having to go to Florida and using the Earth’s rotation to help slingshot us into orbit… We can blast our way straight out,” Banerdt said in the same podcast. “Plus we get to launch in a way that is gonna be visible to maybe 10 million people in Southern California because this rocket’s gonna go right by LA, right by San Diego. And if people are willing to get up at four o’clock in the morning, they should see a pretty cool light show that day.”

The Atlas V will take it up to orbit and the Centaur will give it its push towards Mars, after which it will cruise for six months or so, arriving late in the Martian afternoon on November 26 (Earth calendar).

Its landing will be as exciting (and terrifying) as Phoenix’s and many others. When it hits the Martian atmosphere, InSight will be going more than 13,000 MPH. It’ll slow down first using the atmosphere itself, losing 90 percent of its velocity to friction against a new, reinforced heat shield. A parachute takes off another 90 percent of what’s left, but it’ll still be going over 100 MPH, which would make for an uncomfortable landing. So a couple thousand feet up, it will transition to landing jets that will let it touch down at a stately 5.4 MPH at the desired location and orientation.
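Those round numbers actually check out; here’s the deceleration chain in a few lines of Python, using the approximate figures quoted above rather than official NASA numbers.

```python
# Rough sanity check on InSight's deceleration chain (approximate figures).
entry_speed_mph = 13_000
after_heat_shield = entry_speed_mph * 0.10   # atmosphere sheds ~90 percent
after_parachute = after_heat_shield * 0.10   # the chute sheds another ~90
print(after_heat_shield, after_parachute)    # 1300.0 130.0 -- still >100 MPH
# The landing jets handle the rest, from ~130 MPH down to a stately 5.4 MPH.
```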

After the dust has settled (literally) and the lander has confirmed everything is in working order, it will deploy its circular, fanlike solar arrays and get to work.

Robot arms and self-hammering robomoles

InSight’s mission is to get into the geology of Mars in more detail and depth than ever before. To that end it is packing gear for three major experiments.

SEIS is a collection of six seismic sensors (making the name a tidy bilingual, bidirectional pun) that will sit on the ground under what looks like a tiny Kingdome and monitor the slightest movement of the ground underneath. Whether tiny high-frequency vibrations or longer-period oscillations, all should be detected.

“Seismology is the method that we’ve used to gain almost everything we know, all the basic information about the interior of the Earth, and we also used it back during the Apollo era to understand and to measure sort of the properties of the inside of the moon,” Banerdt said. “And so, we want to apply the same techniques but use the waves that are generated by Mars quakes, by meteorite impacts to probe deep into the interior of Mars all the way down to its core.”

The heat flow and physical properties probe is an interesting one. It will monitor the temperature of the planet below the surface continually for the duration of the mission — but in order to do so, of course, it has to dig its way down. For that purpose it’s equipped with what the team calls a “self-hammering mechanical mole.” Pretty self-explanatory, right?

The “mole” is sort of like a hollow, inch-thick, 16-inch-long nail that will use a spring-loaded tungsten block inside itself to drive itself into the rock. It’s estimated that it will take somewhere between 5,000 and 20,000 strikes to get deep enough to escape the daily and seasonal temperature changes at the surface.

Lastly there’s the Rotation and Interior Structure Experiment, which actually doesn’t need a giant nail, a tiny Kingdome, or anything like that. The experiment involves tracking the position of Insight with extreme precision as Mars rotates, using its radio connection with Earth. It can be located to within about four inches, which when you think about it is pretty unbelievable to begin with. The way that position varies may indicate a wobble in the planet’s rotation and consequently shed light on its internal composition. Combined with data from similar experiments in the ’70s and ’90s, it should let planetologists determine how molten the core is.

“In some ways, InSight is like a scientific time machine that will bring back information about the earliest stages of Mars’ formation 4.5 billion years ago,” said Banerdt in an earlier news release. “It will help us learn how rocky bodies form, including Earth, its moon, and even planets in other solar systems.”

In another space first, InSight has a robotic arm that will not just do things like grab rocks to look at, but will grab items from its own inventory and deploy them into its workspace. Its little fingers will grip handles on top of each deployable instrument and lift it just like a human might. Well, maybe a little differently, but the principle is the same. At nearly 8 feet long, it has a bit more reach than the average astronaut.

Cubes riding shotgun

One of the MarCO cubesats.

InSight is definitely the main payload, but it’s not the only one. Launching on the same rocket are two cubesats, known collectively as Mars Cube One, or MarCO. These “briefcase-size” guys will separate from the rocket around the same time as InSight, but take slightly different trajectories. They don’t have the control to adjust their motion and enter an orbit, so they’ll just zoom by Mars right as InSight is landing.

Cubesats launch all the time, though, right? Sure — into Earth orbit. This will be the first attempt to send cubesats to another planet. If successful, there’s no limit to what could be accomplished — assuming you don’t need to pack anything bigger than a breadbox.

The spacecraft aren’t carrying any super-important experiments; there are two in case one fails, and both are equipped only with UHF antennas to send and receive data and a couple of low-resolution visible-light cameras. The experiment here is really the cubesats themselves and this launch technique. If they make it to Mars, they might be able to help send InSight’s signal home, and if they keep operating beyond that, it’s just icing on the cake.

You can follow along with InSight’s launch here; there’s also the traditional anthropomorphized Twitter account. We’ll post a link to the live stream as soon as it goes up.

Gadgets – TechCrunch John Biggs

As a hater of all sports I am particularly excited about the imminent replacement of humans with robots in soccer. If this exciting match, the Standard Platform League (SPL) final of the German Open featuring the Nao-Team HTWK vs. Nao Devils, is any indication, the future is going to be great.

The robots are all SoftBank NAO robots, designed according to the requirements of the Standard Platform League. They can run (sort of), kick (sort of), and lift themselves up if they fall. The 21-minute video is a bit of a slog and the spectators are definitely not drunk hooligans, but darn if it isn’t great to see little robots hitting the turf to grab a ball before it hits the goal.

I, for one, welcome our soccer-playing robot overlords.

Gadgets – TechCrunch Devin Coldewey

It goes without saying that getting dressed is one of the most critical steps in our daily routine. But long practice has made it second nature, and people suffering from dementia may lose that familiarity, making dressing a difficult and frustrating process. This smart dresser from NYU is meant to help them through the process while reducing the load on overworked caregivers.

It may seem that replacing responsive human help with a robotic dresser is a bit insensitive. But not only are there rarely enough caregivers to help everyone in a timely manner at, say, a nursing care facility; the residents themselves might very well prefer the privacy and independence conferred by such a solution.

“Our goal is to provide assistance for people with dementia to help them age in place more gracefully, while ideally giving the caregiver a break as the person dresses – with the assurance that the system will alert them when the dressing process is completed or prompt them if intervention is needed,” explained the project’s leader, Winslow Burleson, in an NYU news release.

DRESS, as the team calls the device, is essentially a five-drawer dresser with a tablet on top that serves as both display and camera, monitoring and guiding the user through the dressing process.

There are lots of things that can go wrong when you’re putting on your clothes, and really only one way it can go right — shirts go on right side out and trousers forwards, socks on both feet, etc. That simplifies the problem for DRESS, which looks for tags attached to the clothes to make sure they’re on right and in order, making sure someone doesn’t attempt to put on their shoes before their trousers. Lights on each drawer signal the next item of clothing to don.
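The ordering check itself could be as simple as the sketch below; the garment tags and dressing sequence here are my assumptions for illustration, not the actual DRESS software.

```python
# Minimal sketch of a dressing-order check; garment tags and sequence
# are assumed for illustration, not taken from the DRESS system.
from typing import Optional

DRESSING_ORDER = ["underwear", "socks", "trousers", "shirt", "shoes"]

def next_expected(donned: list) -> Optional[str]:
    """Which garment's drawer light should come on next."""
    for item in DRESSING_ORDER:
        if item not in donned:
            return item
    return None  # fully dressed: let the caregiver know all went well

def check_tag(seen_tag: str, donned: list) -> str:
    """Compare the garment tag the camera sees against the expected one."""
    expected = next_expected(donned)
    if seen_tag == expected:
        donned.append(seen_tag)
        return f"ok: {seen_tag} is on; light the drawer for {next_expected(donned)}"
    return f"alert caregiver: saw {seen_tag}, expected {expected}"

print(check_tag("shoes", ["underwear", "socks"]))  # shoes before trousers -> alert
```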

If there’s any problem — the person can’t figure something out, can’t find the right drawer or gets distracted, for instance — the caregiver is alerted and will come help. But if all goes right, the person will have dressed themselves all on their own, something that might not have been possible before.

DRESS is just a prototype right now, a proof of concept to demonstrate its utility. The team is looking into improving the vision system, standardizing clothing folding and enlarging or otherwise changing the coded tags on each item.

Gadgets – TechCrunch Devin Coldewey

The charming robot at the heart of Disney’s Big Hero 6, Baymax, isn’t exactly realistic, but its puffy bod is an (admittedly aspirational) example of the growing field of soft robotics. And now Disney itself has produced a soft robot arm that seems like it could be a prototype from the movie.

Created by Disney Research roboticists, the arm seems clearly inspired by Baymax, from the overstuffed style and delicate sausage fingers to the internal projector that can show status or information to nearby people.

“Where physical human-robot interaction is expected, robots should be compliant and reactive to avoid human injury and hardware damage,” the researchers write in the paper describing the system. “Our goal is the realization of a robot arm and hand system which can physically interact with humans and gently manipulate objects.”

The mechanical parts of the arm are ordinary enough — it has an elbow and wrist and can move around the way many other robot arms do, using the same servos and such.

But around the joints are what look like big pillows, which the researchers call “force sensing modules.” They’re filled with air and can detect pressure on them. This has the dual effect of protecting the servos from humans and vice versa, while also allowing natural tactile interactions.

“Distributing individual modules over the various links of a robot provides contact force sensing over a large area of the robot and allows for the implementation of spatially aware, engaging physical human-robot interactions,” they write. “The independent sensing areas also allow a human to communicate with the robot or guide its motions through touch.”

Like hugging, as one of the researchers demonstrates:

Presumably the robot (or rather, the complete robot this arm would be part of) would understand that it is being hugged, and reciprocate or otherwise respond.

The fingers are also soft and filled with air; they’re created in a 3D printer that can lay down both rigid and flexible materials. Pressure sensors within each inflatable finger let the robot know whether, for example, one fingertip is pressing too hard or bearing all the weight, signaling it to adjust its grip.
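A grip-balancing loop along those lines might look like this sketch; the pressure readings, thresholds and correction rule are invented for illustration, not Disney Research’s actual code.

```python
# Sketch of grip balancing: if one inflatable fingertip reads much more
# pressure than the rest, ease that finger off. All values are invented.

MAX_PRESSURE = 40.0   # kPa, a hypothetical safety/comfort ceiling
BALANCE_RATIO = 1.5   # no finger should bear 1.5x the mean load

def adjust_grip(finger_pressures: list) -> list:
    """Per-finger corrections: negative means relax that finger's grip."""
    mean = sum(finger_pressures) / len(finger_pressures)
    corrections = []
    for pressure in finger_pressures:
        if pressure > MAX_PRESSURE or pressure > BALANCE_RATIO * mean:
            corrections.append(-(pressure - mean))  # back this finger off
        else:
            corrections.append(0.0)                 # grip is fine as-is
    return corrections

print(adjust_grip([12.0, 14.0, 45.0]))  # third fingertip is pressing too hard
```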

This is still very much a prototype; the sensors can’t detect the direction of a force yet, and the materials and construction aren’t airtight by design, meaning they have to be continuously pumped full. But it still shows what they want it to show: that a traditional “hard” robot can be retrofitted into a soft one with a bit of ingenuity. We’re still a long way from Baymax, but it’s more science than fiction now.

Gadgets – TechCrunch John Biggs

If you’ve ever painted a room you know that getting every nook and cranny is pretty difficult and Tim Allen help you if you have hardwood or carpet. The tarp alone costs more than the paint. Now, thanks to MIST, your robot can manage the entire job, slapping paint up like a robotic Jackson Pollock.

The robot uses mapping technology and a sort of elevator-like neck to spray up and down walls. The team, which hails from the University of Waterloo, has finished its prototype, called Maverick. Its members, Shubham Aggarwal, Utkarsh Saini, Baraa Hamodi, Hammad Mirza, and Dhruv Sharma, have experience working at big names including Apple and Facebook.

This is just the beginning for Maverick. The team plans on adding other features that make it easier to use.

“We actually plan on mounting a camera behind the sprayer so that it follows the sprayer up and down, and hence can use image processing to make decisions about whether to actuate the spray or not. We’ve already implemented this logic in software and even have a paint quality detection algorithm. That being said, we haven’t mounted the camera just yet as seen in this video,” the team said.

As you can see below, the project involves a platform, an arm and a spray system. The robot maps the room and then rolls around, hitting spots that are supposed to be painted and avoiding spots that aren’t. Obviously you’re going to want to tape up some spots, but for the most part Maverick will blast your walls with a few layers of paint in the time it would take you to go down to the paint store.

I’ve reached out to the team for more information on their project but until then enjoy their jaunty video below. I, for one, welcome our robotic spraying overlords.