Gadgets – TechCrunch Devin Coldewey

NASA’s ambitious mission to go closer to the Sun than ever before is set to launch in the small hours between Friday and Saturday — at 3:33 AM Eastern from Kennedy Space Center in Florida, to be precise. The Parker Solar Probe, after a handful of gravity assists and preliminary orbits, will enter a stable orbit around the enormous nuclear fireball that gives us all life and sample its radiation from less than 4 million miles away. Believe me, you don’t want to get much closer than that.

If you’re up late tonight (technically tomorrow morning), you can watch the launch live on NASA’s stream.

This is the first mission named after a living researcher, in this case Eugene Parker, who in the ’50s made a number of proposals and theories about the way that stars give off energy. He’s the guy who gave us solar wind, and his research was hugely influential in the study of the sun and other stars — but it’s only now that some of his hypotheses can be tested directly. (Parker himself visited the craft during its construction, and will be at the launch. No doubt he is immensely proud and excited about this whole situation.)

“Directly” means going as close to the sun as technology allows — which leads us to the PSP’s first major innovation: its heat shield, or thermal protection system.

There’s one good thing to be said for the heat near the sun: it’s a dry heat. Because there’s no water vapor or other gas in space to heat up, find some shade and you’ll be quite comfortable. So the probe is essentially carrying the most heavy-duty parasol ever created.

It’s a sort of carbon sandwich, with superheated carbon composite on the outside and a carbon foam core. Altogether it’s less than a foot thick, but it reduces the temperature the probe’s instruments are subjected to from 2,500 degrees Fahrenheit to 85 — actually cooler than it is in much of the U.S. right now.


The car-sized Parker will orbit the sun and constantly rotate itself so that the heat shield is facing inwards and blocking the brunt of the solar radiation. The instruments mostly sit behind it in a big insulated bundle.

And such instruments! There are three major experiments or instrument sets on the probe.

WISPR (Wide-Field Imager for Parker Solar Probe) is a pair of wide-field telescopes that will watch and image the structure of the corona and solar wind. This is the kind of observation we’ve made before — but never from up close. We generally are seeing these phenomena from the neighborhood of the Earth, nearly 100 million miles away. You can imagine that cutting out 90 million miles of cosmic dust, interfering radiation, and other nuisances will produce an amazingly clear picture.

SWEAP (Solar Wind Electrons Alphas and Protons investigation) looks out to the side of the craft to watch the flows of electrons as they are affected by solar wind and other factors. And on the front is the Solar Probe Cup (I suspect this is a reference to the Ray Bradbury story, “Golden Apples of the Sun”), which is exposed to the full strength of the sun’s radiation; a tiny opening allows charged particles in, and by tracking how they pass through a series of charged windows, the instrument can sort them by type and energy.

FIELDS is another instrument set that gets the full heat of the sun. Its antennas are the ones sticking out from the sides — they have to in order to directly sample the electric field surrounding the craft. A set of “fluxgate magnetometers,” clearly a made-up name, measures the magnetic field at an incredibly high rate: two million samples per second.

They’re all powered by solar panels, which seems obvious, but actually it’s a difficult proposition to keep the panels from overloading that close to the sun. They hide behind the shield and just peek out at an oblique angle, so only a fraction of the radiation hits them.

Even then, they’ll get so hot that the team needed to implement the first ever active water cooling system on a spacecraft. Water is pumped through the cells and back behind the shield, where it is cooled by, well, space.

The probe’s mission profile is a complicated one. After escaping the clutches of the Earth, it will swing by Venus, not to get a gravity boost but “almost like doing a little handbrake turn,” as one official described it. The maneuver slows the craft down and sends it closer to the sun — and it’ll do that seven more times, each pass bringing it closer and closer to the sun’s surface, ultimately arriving in a stable orbit 3.83 million miles above the surface. That’s 95 percent of the way from the Earth to the sun.
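For a sense of how that “95 percent” figure shakes out, here’s a quick back-of-the-envelope check. The Earth-sun distance (about 93 million miles) and the solar radius (about 432,000 miles) are my own assumed round numbers, not values from the mission specs:

```python
# Back-of-the-envelope check of the "95 percent of the way" figure.
# Assumed constants (not from the article): mean Earth-sun distance and solar radius.
EARTH_SUN_MILES = 93_000_000     # roughly 1 AU
SOLAR_RADIUS_MILES = 432_000     # approximate

perihelion_from_center = 3_830_000 + SOLAR_RADIUS_MILES   # 3.83M miles above the surface
fraction_covered = 1 - perihelion_from_center / EARTH_SUN_MILES

print(f"Closest approach: {perihelion_from_center / 1e6:.2f} million miles from the sun's center")
print(f"Fraction of the Earth-sun distance covered: {fraction_covered:.1%}")   # ~95%
```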

On the way it will hit a top speed of 430,000 miles per hour, which will make it the fastest spacecraft ever launched.

Parker will make 24 total passes through the corona, and during these times communication with Earth may be interrupted or impractical. If a solar cell is overheating, do you want to wait 20 minutes for a decision from NASA on whether to pull it back? No. This close to the sun even a slight miscalculation results in the reduction of the probe to a cinder, so the team has imbued it with more than the usual autonomy.

It’s covered in sensors in addition to its instruments, and an onboard AI will be empowered to make decisions to rectify anomalies. That sounds worryingly like a HAL 9000 situation, but there are no humans on board to kill, so it’s probably okay.

The mission is scheduled to last 7 years, after which time the fuel used to correct the craft’s orbit and orientation is expected to run out. At that point it will continue as long as it can before drift causes it to break apart and, one rather hopes, become part of the sun’s corona itself.

The Parker Solar Probe is scheduled for launch early Saturday morning, and we’ll update this post when it takes off successfully or, as is possible, is delayed until a later date in the launch window.

Gadgets – TechCrunch Devin Coldewey

Bird strikes on aircraft may be rare, but not so rare that airports shouldn’t take precautions against them. But keeping birds away is a difficult proposition: how do you control the behavior of flocks of dozens or hundreds of birds? Perhaps with a drone that autonomously picks the best path to do so, like this one developed by Caltech researchers.

Right now airports may use manually piloted drones, which are expensive and of course limited by the number of qualified pilots, or trained falcons — which as you might guess is a similarly difficult method to scale.

Soon-Jo Chung at Caltech became interested in the field after the 2009 near-disaster in which US Airways Flight 1549 nearly crashed due to a bird strike but was guided to a comparatively safe landing in the Hudson.

“It made me think that next time might not have such a happy ending,” he said in a Caltech news release. “So I started looking into ways to protect airspace from birds by leveraging my research areas in autonomy and robotics.”

A drone seems like an obvious solution — put it in the air and send those geese packing. But predicting and reliably influencing the behavior of a flock is no simple matter.

“You have to be very careful in how you position your drone. If it’s too far away, it won’t move the flock. And if it gets too close, you risk scattering the flock and making it completely uncontrollable,” Chung said.

The team studied models of how groups of animals move and affect one another, and arrived at a model of their own that describes how birds move in response to threats. From this they can derive the flight path a drone should follow to make the birds swing aside in the desired direction without panicking and scattering.
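The article doesn’t reproduce the team’s equations, but the flavor of the approach (a flock that coheres with itself while being repelled by a nearby threat, and a drone positioned to nudge rather than scatter it) can be sketched with a toy boids-style simulation. Every rule and constant below is illustrative, not the Caltech model:

```python
import numpy as np

# Toy boids-style flock with a repulsive "threat" (the drone).
# All update rules and constants are illustrative, not the Caltech team's model.

def step(positions, velocities, drone_pos, dt=0.1,
         cohesion=0.05, separation=0.2, repulsion=1.5, repel_radius=15.0):
    center = positions.mean(axis=0)
    new_vel = velocities.copy()
    for i, p in enumerate(positions):
        v = cohesion * (center - p)                       # drift toward the flock center
        diffs = p - positions
        dists = np.linalg.norm(diffs, axis=1) + 1e-6
        close = dists < 2.0
        v += separation * (diffs[close] / dists[close, None] ** 2).sum(axis=0)  # avoid crowding
        to_drone = p - drone_pos
        d = np.linalg.norm(to_drone)
        if d < repel_radius:                              # flee the drone, harder when it's closer
            v += repulsion * to_drone / (d ** 2 + 1e-6)
        new_vel[i] = 0.8 * velocities[i] + v
    return positions + new_vel * dt, new_vel

# A naive herding policy: keep the drone a fixed standoff distance behind the flock,
# on the opposite side from the direction we want the birds to move.
def drone_position(positions, desired_direction, standoff=8.0):
    center = positions.mean(axis=0)
    return center - standoff * desired_direction / np.linalg.norm(desired_direction)
```

The researchers’ contribution is essentially what replaces that naive standoff policy: a flight path derived from their learned flock-response model rather than a fixed offset.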

Armed with this new software, drones were deployed in several spaces with instructions to deter birds from entering a given protected area. As an excerpt from the team’s video shows, it seems to have worked.

More experimentation is necessary, of course, to tune the model and get the system to a state that is reliable and works with various sizes of flocks, bird airspeeds, and so on. But it’s not hard to imagine this as a standard system for locking down airspace: a dozen or so drones informed by precision radar could protect quite a large area.

The team’s results are published in IEEE Transactions on Robotics.

Gadgets – TechCrunch Frederic Lardinois

It’s been a week since Lenovo’s Google Assistant-powered smart display went on sale. Slowly but surely, its competitors are launching their versions, too. Today, JBL announced that its $249.95 JBL Link View is now available for pre-order, with an expected ship date of September 3, 2018.

JBL went for a slightly different design than Lenovo (and the upcoming LG WK9), but in terms of functionality, these devices are pretty much the same. The Link View features an 8-inch HD screen; unlike Lenovo’s Smart Display, JBL is not making a larger 10-inch version. It’s got two 10W speakers and the usual support for Bluetooth, as well as Google’s Chromecast protocol.

JBL says the unit is splash proof (IPX4), so you can safely use it to watch YouTube recipe videos in your kitchen. It also offers a 5MP front-facing camera for your video chats and a privacy switch that lets you shut off the camera and microphone.

JBL, Lenovo and LG all announced their Google Assistant smart displays at CES earlier this year. Lenovo was the first to actually ship a product, and both the hardware and Google’s software received a positive reception. There’s no word on when LG’s WK9 will hit the market.

Gadgets – TechCrunch Devin Coldewey

Gripping something with your hand is one of the first things you learn to do as an infant, but it’s far from a simple task, and it only gets more complex and variable as you grow up. That complexity makes grasping difficult for machines to teach themselves, but researchers at the Elon Musk and Sam Altman-backed OpenAI have created a system that not only holds and manipulates objects much like a human does, but developed these behaviors all on its own.

Many robots and robotic hands are already proficient at certain grips or movements — a robot in a factory can wield a bolt gun even more dexterously than a person. But the software that lets that robot do that task so well is likely to be hand-written and extremely specific to the application. You couldn’t, for example, give it a pencil and ask it to write. Even something on the same production line, like welding, would require a whole new system.

Yet for a human, picking up an apple isn’t so different from picking up a cup. There are differences, but our brains automatically fill in the gaps and we can improvise a new grip, hold an unfamiliar object securely and so on. This is one area where robots lag severely behind their human models. Furthermore, you can’t just train a bot to do what a human does — you’d have to provide millions of examples to adequately show what a human would do with thousands of given objects.

The solution, OpenAI’s researchers felt, was not to use human data at all. Instead, they let the computer try and fail over and over in a simulation, slowly learning how to move its fingers so that the object in its grasp moves as desired.

The system, which they call Dactyl, was provided only with the positions of its fingers and three camera views of the object in-hand — but remember, when it was being trained, all this data was simulated, taking place in a virtual environment. There, the computer doesn’t have to work in real time — it can try a thousand different ways of gripping an object in a few seconds, analyzing the results and feeding that data forward into the next try. (The hand itself is a Shadow Dexterous Hand, which is also more complex than most robotic hands.)

In addition to different objects and poses the system needed to learn, there were other randomized parameters, like the amount of friction the fingertips had, the colors and lighting of the scene and more. You can’t simulate every aspect of reality (yet), but you can make sure that your system doesn’t only work in a blue room, on cubes with special markings on them.
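OpenAI calls this technique domain randomization. As a rough illustration of the idea, each training episode re-rolls the simulator’s physics and visuals; the parameter ranges and the environment and policy objects below are hypothetical placeholders, not Dactyl’s actual code:

```python
import random

# Sketch of the "domain randomization" idea: every training episode runs in a
# simulator whose physical and visual parameters are re-sampled, so the learned
# policy can't overfit to one particular version of reality.
# The ranges and the make_sim_env / policy objects are hypothetical.

def sample_randomized_params():
    return {
        "fingertip_friction": random.uniform(0.7, 1.3),   # scale on nominal friction
        "object_mass_scale":  random.uniform(0.8, 1.2),
        "light_intensity":    random.uniform(0.5, 1.5),
        "camera_jitter_deg":  random.uniform(-2.0, 2.0),
        "object_hue_shift":   random.uniform(-0.1, 0.1),
    }

def collect_episode(policy, make_sim_env):
    env = make_sim_env(**sample_randomized_params())   # a new "reality" each episode
    obs, done, trajectory = env.reset(), False, []
    while not done:
        action = policy(obs)                            # fingertip positions + camera views in
        obs, reward, done, _ = env.step(action)         # joint targets out
        trajectory.append((obs, action, reward))
    return trajectory                                   # fed to the reinforcement-learning update
```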

They threw a lot of power at the problem: 6144 CPUs and 8 GPUs, “collecting about one hundred years of experience in 50 hours.” And then they put the system to work in the real world for the first time — and it demonstrated some surprisingly human-like behaviors.

The things we do with our hands without even noticing, like turning an apple around to check for bruises or passing a mug of coffee to a friend, use lots of tiny tricks to stabilize or move the object. Dactyl recreated several of them, for example holding the object with a thumb and single finger while using the rest to spin it to the desired orientation.

What’s great about this system is not just the naturalness of its movements and that they were arrived at independently by trial and error, but that it isn’t tied to any particular shape or type of object. Just like a human, Dactyl can grip and manipulate just about anything you put in its hand, within reason of course.

This flexibility is called generalization, and it’s important for robots that must interact with the real world. It’s impossible to hand-code separate behaviors for every object and situation in the world, but a robot that can adapt and fill in the gaps while relying on a set of core understandings can get by.

As with OpenAI’s other work, the paper describing the results is freely available, as are some of the tools they used to create and test Dactyl.

Gadgets – TechCrunch Devin Coldewey

A pair of Canadian students making a simple, inexpensive prosthetic arm have taken home the grand prize at Microsoft’s Imagine Cup, a global startup competition the company holds yearly. SmartArm will receive $85,000, a mentoring session with CEO Satya Nadella, and some other Microsoft goodies. But they were far from the only worthy team from the dozens that came to Redmond to compete.

The Imagine Cup is an event I personally look forward to, because it consists entirely of smart young students, usually engineers and designers themselves (not yet “serial entrepreneurs”) and often aiming to solve real-world problems.

In the semi-finals I attended, I saw a pair of young women from Pakistan looking to reduce stillbirth rates with a new pregnancy monitor, an automated eye-checking device that can be deployed anywhere and used by anyone, and an autonomous monitor for water tanks in drought-stricken areas. When I was their age, I was living at my mom’s house, getting really good at Mario Kart for SNES and working as a preschool teacher.

Even Nadella bowed before their ambitions in his appearance on stage at the final event this morning.

“Last night I was thinking, ‘What advice can I give people who have accomplished so much at such a young age?’ And I said, I should go back to when I was your age and doing great things. Then I realized…I definitely wouldn’t have made these finals.”

That got a laugh, but (with apologies to Nadella) it’s probably true. Students today have unbelievable resources available to them and as many of the teams demonstrated, they’re making excellent use of those resources.

SmartArm in particular combines a clever approach with state of the art tech in a way that’s so simple it’s almost ridiculous.

The issue they saw as needing a new approach is prosthetic arms, which, as they pointed out, are often either non-functional (think just a plastic arm or a simple flexion-based gripper) or highly expensive (a mechanical arm might cost tens of thousands of dollars). Why can’t one be both functional and affordable?

Their solution is an extremely interesting and timely one: a relatively simply actuated 3D-printed forearm and hand that has its own vision system built in. A camera built into the palm captures an image of the item the user aims to pick up, and quickly classifies it — an apple, a key ring, a pen — and selects the correct grip for that object.

The user activates the grip by flexing their upper arm muscles, an action that’s detected by a Myo-like muscle sensor (possibly actually a Myo, but I couldn’t tell from the demo). It sends the signal to the arm to activate the hand movement, and the fingers move accordingly.
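Put together, the control flow is simple enough to sketch in a few lines. Every name and threshold below is a hypothetical placeholder for how such a loop might look, not SmartArm’s actual software:

```python
# Hypothetical sketch of the control loop described above: the palm camera
# classifies the target object, a grip is looked up for that class, and the
# grip only fires when the muscle (EMG) sensor reads a flex.

GRIP_FOR_CLASS = {
    "apple":    "spherical_grasp",
    "key_ring": "pinch_grasp",
    "pen":      "tripod_grasp",
}

def control_loop(palm_camera, emg_sensor, hand, classifier, flex_threshold=0.6):
    while True:
        frame = palm_camera.capture()
        label, confidence = classifier.predict(frame)      # e.g. a small on-device CNN
        grip = GRIP_FOR_CLASS.get(label, "power_grasp")     # fall back to a generic grip
        if confidence > 0.5 and emg_sensor.read() > flex_threshold:
            hand.execute(grip)                              # close the fingers in the chosen pose
            wait_for_release(emg_sensor, flex_threshold)    # hypothetical helper
            hand.release()

def wait_for_release(emg_sensor, threshold):
    # Block until the user relaxes their upper-arm muscles.
    while emg_sensor.read() > threshold:
        pass
```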

It’s still extremely limited — you likely can’t twist a doorknob with it, or reliably grip a knife or fork, and so on. But for many everyday tasks it could still be useful. And the idea of putting the camera in the palm is a high-risk, high-reward one. It is of course blocked when you pick up the item, but what does it need to see during that time? You deactivate the grip to put the cup down and the camera is exposed again to watch for the next task.

Bear in mind this is not meant as some kind of serious universal hand replacement. But it provides smart, simple functionality for people who might otherwise have had to use a pincer arm or the like. And according to the team, it should cost less than $100. How that’s possible while including the muscle sensor is unclear to me, but I’m not the one who built a bionic arm, so I’m going to defer to them on this. Even if they miss that target by 50 percent, it would still be a huge bargain, honestly.

There’s an optional subscription that would allow the arm to improve itself over time as it learns more about your habits and objects you encounter regularly — this would also conceivably be used to improve other SmartArms as well.

As for how it looks — rather robotic — the team defended it based on their own feedback from amputees: “They’d rather be asked, ‘Hey, where did you get that arm?’ than ‘What happened to your arm?’” But a more realistic-looking set of fingers is also under development.

The team said they were originally looking for venture funding but ended up getting a grant instead; they’ve got interest from a number of Canadian and American institutions already, and winning the Imagine Cup will almost certainly propel them to greater prominence in the field.

My own questions would be on durability, washing, and the kinds of things that really need to be tested in real-world scenarios. What if the camera lens gets dirty or scratched? Will there be color options for people that don’t want to have white “skin” on their arm? What’s the support model? What about insurance?

SmartArm takes the grand prize, but the runners up and some category winners get a bunch of good stuff too. I plan to get in touch with SmartArm and several other teams from the competition to find out more and hear about their progress. I was really quite impressed not just with the engineering prowess but the humanitarianism and thoughtfulness on display this year. Nadella summed it up best:

“One of the things that I always think about is this competition in some sense ups the game, right?” he said at the finals. “People from all over the world are thinking about how do I use technology, how do I learn new concepts, but then more importantly, how do I solve some of these unmet, unarticulated needs? The impact that you all can have is just enormous, the opportunity is enormous. But I also believe there is an amazing sense of responsibility, or a need for responsibility that we all have to collectively exercise given the opportunity we have been given.”

Gadgets – TechCrunch John Biggs

In a truly fascinating exploration into two smart speakers – the Sonos One and the Amazon Echo – BoltVC’s Ben Einstein has found some interesting differences in the way a traditional speaker company and an infrastructure juggernaut look at their flagship devices.

The post is well worth a full read, but the gist is this: Sonos, a very traditional speaker company, has produced a good speaker and modified its current hardware to support smart home features like Alexa and Google Assistant. The Sonos One, notes Einstein, is a speaker first and smart hardware second.

“Digging a bit deeper, we see traditional design and manufacturing processes for pretty much everything. As an example, the speaker grill is a flat sheet of steel that’s stamped, rolled into a rounded square, welded, seams ground smooth, and then powder coated black. While the part does look nice, there’s no innovation going on here,” he writes.

The Amazon Echo, on the other hand, looks like what would happen if an engineer was given an unlimited budget and told to build something that people could talk to. The design decisions are odd and intriguing and it is ultimately less a speaker than a home conversation machine. Plus it is very expensive to make.

Of the Echo’s grille, Einstein writes: “Pulling off the sleek speaker grille, there’s a shocking secret here: this is an extruded plastic tube with a secondary rotational drilling operation. In my many years of tearing apart consumer electronics products, I’ve never seen a high-volume plastic part with this kind of process. After some quick math on the production timelines, my guess is there’s a multi-headed drill and a rotational axis to create all those holes. CNC drilling each hole individually would take an extremely long time. If anyone has more insight into how a part like this is made, I’d love to see it! Bottom line: this is another surprisingly expensive part.”

Sonos, which has been making a form of smart speaker for fifteen years, is a CE company with cachet. Amazon, on the other hand, sees its devices as a way into living rooms and a delivery system for sales and is fine with licensing its tech before making its own. Therefore to compare the two is a bit disingenuous. Einstein’s thesis that Sonos’ trajectory is troubled by the fact that it depends on linear and closed manufacturing techniques while Amazon spares no expense to make its products is true. But Sonos makes speakers that work together amazingly well. They’ve done this for a decade and a half. If you compare their products – and I have – with competing smart speakers and non-audiophile “dumb” speakers, you will find their UI, UX, and sound quality surpass most comers.

Amazon makes things to communicate with Amazon. This is a big difference.

Where Einstein is correct, however, is in his belief that Sonos is at a definite disadvantage. Sonos chases smart technology while Amazon and Google (and Apple, if their HomePod is any indication) lead. That said, there is some value to having a fully-connected set of speakers with add-on smart features vs. having to build an entire ecosystem of speaker products that can take on every aspect of the home theatre.

On the flip side Amazon, Apple, and Google are chasing audio quality while Sonos leads. While we can say that in the future we’ll all be fine with tinny round speakers bleating out Spotify in various corners of our room, there is something to be said for a good set of woofers. Whether this nostalgic love of good sound survives this generation’s tendency to watch and listen to low resolution media is anyone’s bet, but that’s Amazon’s bet to lose.

Ultimately Sonos is a strong and fascinating company. An upstart that survived the great CE destruction wrought by Kickstarter and Amazon, it produces some of the best mid-range speakers I’ve used. Amazon makes a nice – almost alien – product, but given that it can be easily copied and stuffed into a hockey puck that probably costs less than the entire bill of materials for the Amazon Echo, it’s clear that Amazon’s goal isn’t to make speakers.

Whether the coming Sonos IPO will be successful depends partially on Amazon and Google playing ball with the speaker maker. The rest depends on the quality of product and the dedication of Sonos users. This good will isn’t as valuable as a signed contract with major infrastructure players but Sonos’ good will is far more than Amazon and Google have with their popular but potentially intrusive product lines. Sonos lives in the home while Google and Amazon want to invade it. That is where Sonos wins.

Gadgets – TechCrunch Sarah Wells

With long summer evenings comes the perfect opportunity to dust off your old boxes of circuits and wires and start to build something. If you’re short on inspiration, you might be interested in artist and engineer Dan Macnish’s how-to guide on building an AI-powered doodle camera using a thermal printer, a Raspberry Pi, a dash of Python and Google’s Quick, Draw! data set.

“Playing with neural networks for object recognition one day, I wondered if I could take the concept of a Polaroid one step further, and ask the camera to re-interpret the image, printing out a cartoon instead of a faithful photograph,” Macnish wrote on his blog about the project, called Draw This.

To make this work, Macnish drew on Google’s object recognition neural networks and the data set created for the game Quick, Draw! Tying the two together with some Python code, Macnish was able to have his creation recognize real images and print out the best corresponding doodle from the Quick, Draw! data set.
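The actual code is on GitHub; as a rough, hypothetical sketch of the pipeline described above (capture a photo, classify it, look up a matching Quick, Draw! doodle, send it to the thermal printer), it might look something like this, with the classifier, data set lookup and printer left as placeholder objects:

```python
# Rough sketch of the Draw This pipeline: classify the photo, map the label to a
# Quick, Draw! category, and print one of that category's stroke drawings.
# The classifier, quickdraw_lookup and printer objects are placeholders, not
# Macnish's actual code (which is available on GitHub).
from picamera import PiCamera   # Raspberry Pi camera module

def shoot_and_doodle(classifier, quickdraw_lookup, printer):
    camera = PiCamera()
    camera.capture("/tmp/shot.jpg")

    label = classifier.predict("/tmp/shot.jpg")       # e.g. "hot dog"
    doodle = quickdraw_lookup(label)                  # a stroke drawing from the Quick, Draw! set
    if doodle is None:
        doodle = quickdraw_lookup("face")             # arbitrary fallback when there's no match
    printer.print_image(doodle.to_bitmap())           # rasterize the strokes for the thermal printer
```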

But since output doodles are limited to the data set, there can be some discrepancy between what the camera “sees” and what it generates for the photo.

“You point and shoot – and out pops a cartoon; the camera’s best interpretation of what it saw,” Macnish writes. “The result is always a surprise. A food selfie of a healthy salad might turn into an enormous hot dog.”

If you want to give this a go for yourself, Macnish has uploaded the instructions and code needed to build this project on GitHub.

Gadgets – TechCrunch Devin Coldewey

A robot’s got to know its limitations. But that doesn’t mean it has to accept them. This one in particular uses tools to expand its capabilities, commandeering nearby items to construct ramps and bridges. It’s satisfying to watch but, of course, also a little worrying.

This research, from Cornell and the University of Pennsylvania, is essentially about making a robot take stock of its surroundings and recognize something it can use to accomplish a task that it knows it can’t do on its own. It’s actually more like a team of robots, since the parts can detach from one another and accomplish things on their own. But you didn’t come here to debate the multiplicity or unity of modular robotic systems! That’s for the folks at the IEEE International Conference on Robotics and Automation, where this paper was presented (and Spectrum got the first look).

SMORES-EP is the robot in play here, and the researchers have given it a specific breadth of knowledge. It knows how to navigate its environment, but also how to inspect it with its little mast-cam and from that inspection derive meaningful data like whether an object can be rolled over, or a gap can be crossed.

It also knows how to interact with certain objects, and what they do; for instance, it can use its built-in magnets to pull open a drawer, and it knows that a ramp can be used to roll up to an object of a given height or lower.

A high-level planning system directs the robots/robot-parts based on knowledge that isn’t critical for any single part to know. For example, given the instruction to find out what’s in a drawer, the planner understands that to accomplish that, the drawer needs to be open; for it to be open, a magnet-bot will have to attach to it from this or that angle, and so on. And if something else is necessary, for example a ramp, it will direct that to be placed as well.
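As a toy illustration of that kind of goal-directed reasoning (not the actual SMORES-EP planner), a backward-chaining planner over hand-written actions and preconditions captures the “to open the drawer, first attach the magnet bot; to see inside, first place a ramp” structure:

```python
# Toy backward-chaining planner: goals are expanded into preconditions, and
# sub-goals (open the drawer, place a ramp) are ordered before the action that
# needs them. A generic sketch, not the SMORES-EP planning system itself.

ACTIONS = {
    "inspect_drawer_contents": {"pre": ["drawer_open", "camera_in_position"], "adds": ["contents_known"]},
    "open_drawer":             {"pre": ["magnet_bot_attached"],               "adds": ["drawer_open"]},
    "attach_magnet_bot":       {"pre": ["at_drawer"],                         "adds": ["magnet_bot_attached"]},
    "climb_ramp":              {"pre": ["ramp_placed"],                       "adds": ["camera_in_position"]},
    "place_ramp":              {"pre": ["ramp_located"],                      "adds": ["ramp_placed"]},
}

def plan(goal, state, plan_so_far=None):
    """Backward-chain from the goal, recursively satisfying preconditions."""
    plan_so_far = plan_so_far or []
    if goal in state:
        return plan_so_far
    for name, action in ACTIONS.items():
        if goal in action["adds"]:
            for pre in action["pre"]:
                plan_so_far = plan(pre, state, plan_so_far)
                state = state | {pre}
            plan_so_far.append(name)
            return plan_so_far
    raise RuntimeError(f"no action achieves {goal}")

print(plan("contents_known", {"at_drawer", "ramp_located"}))
```

Running it on the drawer task yields an ordered plan: attach the magnet bot, open the drawer, place the ramp, climb it, then inspect the contents.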

The experiment shown in this video has the robot system demonstrating how this could work in a situation where the robot must accomplish a high-level task using this limited but surprisingly complex body of knowledge.

In the video, the robot is told to check the drawers for certain objects. In the first drawer, the target objects aren’t present, so it must inspect the next one up. But it’s too high — so it needs to get on top of the first drawer, which luckily for the robot is full of books and constitutes a ledge. The planner sees that a ramp block is nearby and orders it to be put in place, and then part of the robot detaches to climb up and open the drawer, while the other part maneuvers into place to check the contents. Target found!

In the next task, it must cross a gap between two desks. Fortunately, someone left the parts of a bridge just lying around. The robot puts the bridge together, places it in position after checking the scene, and sends its forward half rolling towards the goal.

These cases may seem rather staged, but this isn’t about the robot itself and its ability to tell what would make a good bridge. That comes later. The idea is to create systems that logically approach real-world situations based on real-world data and solve them using real-world objects. Being able to construct a bridge from scratch is nice, but unless you know what a bridge is for, when and how it should be applied, where it should be carried and how to get over it, and so on, it’s just a part in search of a whole.

Likewise, many a robot with a perfectly good drawer-pulling hand will have no idea that you need to open a drawer before you can tell what’s in it, or that maybe you should check other drawers if the first doesn’t have what you’re looking for!

Such basic problem-solving is something we take for granted, but nothing can be taken for granted when it comes to robot brains. Even in the experiment described above, the robot failed multiple times for multiple reasons while attempting to accomplish its goals. That’s okay — we all have a little room to improve.

Gadgets – TechCrunch Romain Dillet

French startup Snips has been working on voice assistant technology that respects your privacy. And the company is going to use its own voice assistant for a set of consumer devices. As part of this consumer push, the company is also announcing an initial coin offering.

Yes, it sounds a bit like Snips is playing a game of buzzword bingo. Anyone can currently download the open source Snips SDK and play with it on a Raspberry Pi with a microphone and a speaker. It’s private by design; you can even make it work without any internet connection. Companies can partner with Snips to embed a voice assistant in their own devices too.

But Snips is adding a B2C element to its business. This time, the company is going to compete directly with Amazon Echo and Google Home speakers. You’ll be able to buy the Snips AIR Base and Snips AIR Satellites.

The base will be a good old smart speaker, while satellites will be tiny portable speakers that you can put in all your rooms. The company plans to launch those devices in 18 months.


By default, Snips devices will come with basic skills to control your smart home devices, get the weather, control music, timers, alarms, calendars and reminders. Unlike the Amazon Echo or Google Home, voice commands won’t be sent to Google’s or Amazon’s servers.

Developers will be able to create skills and publish them on a marketplace. That marketplace will run on a new blockchain — the AIR blockchain.

And that’s where the ICO comes along. The marketplace will accept AIR tokens to buy more skills. You’ll also be able to generate training data for voice commands using AIR tokens. To be honest, I’m not sure why good old credit card transactions weren’t enough. But I guess that’s a good way to raise money.

Gadgets – TechCrunch Devin Coldewey

The initial report by the National Transportation Safety Board on the fatal self-driving Uber crash in March confirms that the car detected the pedestrian as early as 6 seconds before the crash, but did not slow or stop because its emergency braking systems were deliberately disabled.

Uber told the NTSB that “emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior,” in other words to ensure a smooth ride. “The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator.” It’s not clear why the emergency braking capability even exists if it is disabled while the car is in operation. The Volvo model’s built-in safety systems — collision avoidance and emergency braking, among other things — are also disabled while in autonomous mode.

It appears that in an emergency situation like this, this “self-driving car” is no better, and possibly substantially worse, than many normal cars already on the road.

It’s hard to understand the logic of this decision. An emergency is exactly the situation when the self-driving car, and not the driver, should be taking action. Its long-range sensors can detect problems accurately from much further away, while its 360-degree awareness and route planning allow it to make safe maneuvers that a human would not be able to do in time. Humans, even when their full attention is on the road, are not the best at catching these things; relying only on them in the most dire circumstances that require quick response times and precise maneuvering seems an incomprehensible and deeply irresponsible decision.

According to the NTSB report, the vehicle first registered Elaine Herzberg on lidar 6 seconds before the crash — at the speed it was traveling, that puts first contact at about 378 feet away. She was first identified as an unknown object, then a vehicle, then a bicycle, over the next few seconds (it isn’t stated when these classifications took place exactly).


During these 6 seconds, the driver could and should have been alerted of an anomalous object ahead on the left — whether it was a deer, a car, or a bike, it was entering or could enter the road and should be attended to. But the system did not warn the driver and apparently had no way to.

1.3 seconds before impact, which is to say about 80 feet away, the Uber system decided that an emergency braking procedure would be necessary to avoid Herzberg. But it did not hit the brakes, as the emergency braking system had been disabled, nor did it warn the driver because, again, it couldn’t.
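Those two figures are consistent with each other: they imply a roughly constant speed of about 63 feet per second, or 43 mph, a number inferred here from the article’s distances and times rather than quoted from the NTSB report:

```python
# Back-of-the-envelope check of the distances implied by the report's timeline.
# The ~43 mph speed is inferred from 378 ft / 6 s, not quoted in the article.
speed_ft_per_s = 378 / 6.0                    # ~63 ft/s
speed_mph = speed_ft_per_s * 3600 / 5280      # ~43 mph

dist_at_detection = speed_ft_per_s * 6.0      # ~378 ft, first lidar registration
dist_at_brake_call = speed_ft_per_s * 1.3     # ~82 ft, when emergency braking was deemed necessary

print(f"{speed_mph:.0f} mph; detection at {dist_at_detection:.0f} ft, braking decision at {dist_at_brake_call:.0f} ft")
```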

Less than a second before impact, the driver happened to look up from whatever it was she was doing and saw Herzberg, whom the car had known about in some way for five long seconds by then. The car struck and killed her.

It reflects extremely poorly on Uber that it had disabled the car’s ability to respond in an emergency — though it was authorized to speed at night — and provided no method for the system to alert the driver should it detect something important. This isn’t just a safety issue, like going on the road with a sub-par lidar system or without checking the headlights — it’s a failure of judgment by Uber, and one that cost a person’s life.

Arizona, where the crash took place, barred Uber from further autonomous testing, and Uber yesterday ended its program in the state.

Uber offered the following statement on the report:

Over the course of the last two months, we’ve worked closely with the NTSB. As their investigation continues, we’ve initiated our own safety review of our self-driving vehicles program. We’ve also brought on former NTSB Chair Christopher Hart to advise us on our overall safety culture, and we look forward to sharing more on the changes we’ll make in the coming weeks.