Gadgets – TechCrunch Matthew Lynley

And there we have it: Bird, one of the massively hyped emerging scooter startups, has roped in its next pile of funding, picking up another $300 million in a round led by Sequoia Capital.

The company announced the long-anticipated round this morning, with Sequoia’s Roelof Botha joining the company’s board of directors. This is the second round of funding that Bird has raised over the span of a few months, sending it from a reported $1 billion valuation in May to a $2 billion valuation by the end of June. In March, the company had a $300 million valuation, but the scooter hype train has officially hit a pretty impressive inflection point as investors pile on to get money into what many consider to be the next iteration of transportation, at an even more granular level than cars or bikes. New investors in the round include Accel, B Capital, CRV, Sound Ventures, Greycroft and e.ventures; previous investors Craft Ventures, Index Ventures, Valor, Goldcrest, Tusk Ventures and Upfront Ventures are also in the round. (So, basically everyone else who isn’t in competitor Lime.)

Scooter mania has captured the hearts of Silicon Valley and investors in general — including Paige Craig, who actually jumped from VC to join Bird as its VP of business — with a large amount of capital flowing into the area about as quickly as it possibly can. These sorts of revolving-door fundraising processes are not entirely uncommon, especially in very hot areas of investment, though the scooter scene has exploded considerably faster than most. Bird’s round comes amid reports of a mega-round for Lime, one of its competitors, which is said to be raising another $250 million led by GV, with Skip also raising $25 million.

“We have met with over 20 companies focused on the last-mile problem over the years and feel this is a multi-billion dollar opportunity that can have a big impact in the world,” CRV’s Saar Gur, who did the deal for the firm, said. “We have a ton of conviction that this team has original product thought (they created the space) and the execution chops to build something extremely valuable here. And we have been long-term focused, not short-term focused, in making the investment. The ‘hype’ in our decision (the non-zero answer) is that Bird has built the best product in the market and while we kept meeting with more startups wanting to invest in the space — we kept coming back to Bird as the best company. So in that sense, the hype from consumers is real and was a part of the decision. On unit economics: We view the first product as an MVP (as the company is less than a year old) — and while the unit economics are encouraging, they played a part of the investment decision but we know it is not even the first inning in this market.”

There’s certainly an argument to be made for Bird, whose scooters you’ll see pretty much all over the place in cities like Los Angeles. For trips that are just a few miles down wide roads or sidewalks, where you aren’t likely to run into anyone, a quick scan of a code and a hop on a Bird may be worth the few bucks in order to save a few minutes crossing those considerably long blocks. Users can grab a Bird they see and start riding right away if they’re running late, and it potentially alleviates the pressure of calling a car for short distances in traffic, where a scooter may actually make more physical sense for getting from point A to point B than a car.

There are some considerable hurdles going forward, both theoretical and practical. In San Francisco, though it accounts for just a small slice of the United States’ metropolitan population, the company is facing significant pushback from the local government, and its scooters have for the time being been kicked off the sidewalks. There’s also the looming shadow of possible changes in tariffs, though Gur said that likely wouldn’t be an issue and that “the unit economics appear to be viable even if tariffs were to be added to the cost of the scooters.” (Xiaomi is one of the suppliers for Bird, for example.)

Gadgets – TechCrunch Devin Coldewey

The initial report by the National Transportation Safety Board on the fatal self-driving Uber crash in March confirms that the car detected the pedestrian as early as 6 seconds before the crash, but did not slow or stop because its emergency braking systems were deliberately disabled.

Uber told the NTSB that “emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior,” in other words to ensure a smooth ride. “The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator.” It’s not clear why the emergency braking capability even exists if it is disabled while the car is in operation. The Volvo model’s built-in safety systems — collision avoidance and emergency braking, among other things — are also disabled while in autonomous mode.

It appears that in an emergency situation like this, the “self-driving car” is no better than, and arguably substantially worse than, many normal cars already on the road.

It’s hard to understand the logic of this decision. An emergency is exactly the situation when the self-driving car, and not the driver, should be taking action. Its long-range sensors can detect problems accurately from much further away, while its 360-degree awareness and route planning allow it to make safe maneuvers that a human would not be able to do in time. Humans, even when their full attention is on the road, are not the best at catching these things; relying only on them in the most dire circumstances that require quick response times and precise maneuvering seems an incomprehensible and deeply irresponsible decision.

According to the NTSB report, the vehicle first registered Elaine Herzberg on lidar 6 seconds before the crash — at the speed it was traveling, that puts first detection at about 378 feet away. She was classified first as an unknown object, then as a vehicle, then as a bicycle over the next few seconds (the report doesn’t state exactly when these classifications took place).

The car following the collision.

During these 6 seconds, the driver could and should have been alerted of an anomalous object ahead on the left — whether it was a deer, a car, or a bike, it was entering or could enter the road and should be attended to. But the system did not warn the driver and apparently had no way to.

1.3 seconds before impact, which is to say about 80 feet away, the Uber system decided that an emergency braking procedure would be necessary to avoid Herzberg. But it did not hit the brakes, as the emergency braking system had been disabled, nor did it warn the driver because, again, it couldn’t.
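For a sense of scale, those distances follow from the report’s own numbers: 378 feet covered in 6 seconds implies roughly 63 feet per second, or about 43 mph. A quick back-of-the-envelope check (the speed here is derived from those figures, not quoted from the report):

```python
# Rough check of the distances in the NTSB timeline.
# ~63 ft/s is implied by the report's own figures (378 ft covered in 6 s),
# i.e. roughly 43 mph; it's an approximation, not an official value.
speed_ft_per_s = 378 / 6  # ~63 ft/s

for seconds_before_impact in (6.0, 1.3):
    distance_ft = speed_ft_per_s * seconds_before_impact
    print(f"{seconds_before_impact} s before impact -> ~{distance_ft:.0f} ft away")

# Prints ~378 ft for the first lidar registration and ~82 ft for the moment
# the system concluded emergency braking was needed, in line with the report.
```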

Then, less than a second before impact, the driver happened to look up from whatever it was she was doing and saw Herzberg, whom the car had known about in some way for 5 long seconds by then. The car struck and killed her.

It reflects extremely poorly on Uber that it had disabled the car’s ability to respond in an emergency — even though the vehicle was authorized to drive at speed at night — and provided no method for the system to alert the driver should it detect something important. This isn’t just a safety lapse, like going on the road with a sub-par lidar system or without checking the headlights — it’s a failure of judgment by Uber, and one that cost a person’s life.

Arizona, where the crash took place, barred Uber from further autonomous testing, and Uber yesterday ended its program in the state.

Uber offered the following statement on the report:

Over the course of the last two months, we’ve worked closely with the NTSB. As their investigation continues, we’ve initiated our own safety review of our self-driving vehicles program. We’ve also brought on former NTSB Chair Christopher Hart to advise us on our overall safety culture, and we look forward to sharing more on the changes we’ll make in the coming weeks.

Gadgets – TechCrunch Devin Coldewey

Waymo has become the second company to apply for the newly available permit to deploy autonomous vehicles without safety drivers on some California roads, the San Francisco Chronicle reports. It would be putting its cars — well, minivans — on streets around Mountain View, where it already has an abundance of data.

The company already has driverless cars in play over in Phoenix, as it showed in a few promotional videos last month. So this isn’t the first public demonstration of its confidence.

California only just made it possible to grant permits allowing autonomous vehicles without safety drivers on April 2; one other company has applied for it in addition to Waymo, but it’s unclear which. The new permit type also allows for vehicles lacking any kind of traditional manual controls, but for now the company is sticking with its modified Chrysler Pacificas. Hey, they’re practical.

The recent fatal collision of an Uber self-driving car with a pedestrian, plus another fatality in a Tesla operating in semi-autonomous mode, make this something of an awkward time to introduce vehicles to the road minus safety drivers. Of course, it must be said that both of those cars had people behind the wheel at the time of their crashes.

Assuming the permit is granted, Waymo’s vehicles will be limited to the Mountain View area, which makes sense — the company has been operating there essentially since its genesis as a research project within Google. So there should be no shortage of detail in its data, and the local authorities will be familiar with the people they need to work with on issues like accidents, permit problems and so on.

No details yet on what exactly the cars will be doing, or whether you’ll be able to ride in one. Be patient.

Gadgets – TechCrunch Devin Coldewey

When Luminar came out of stealth last year with its built-from-scratch lidar system, it seemed to beat established players like Velodyne at their own game — but at great expense and with no capability to build at scale. After the tech proved itself on the road, however, Luminar got to work making its device better, cheaper, and able to be assembled in minutes rather than hours.

“This year for us is all about scale. Last year it took a whole day to build each unit — they were being hand assembled by optics PhDs,” said Luminar’s wunderkind founder Austin Russell. “Now we’ve got a 136,000 square foot manufacturing center and we’re down to 8 minutes a unit.”

Lest you think the company has sacrificed quality for quantity, be it known that the production unit is about 30 percent lighter and more power efficient, can see a bit further (250 meters vs 200), and detect objects with lower reflectivity (think people wearing black clothes in the dark).

The secret — to just about the whole operation, really — is the sensor. Luminar’s lidar systems, like all others, fire out a beam of light and essentially time its return. That means you need a photosensitive surface that can discern just a handful of photons.
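That timing requirement is brutal: light covers about 30 centimeters per nanosecond, so ranging out to a couple of hundred meters means resolving round trips of under two microseconds. A minimal time-of-flight sketch, using only the basic physics rather than anything specific to Luminar’s signal chain:

```python
C = 299_792_458.0  # speed of light in m/s

def range_from_round_trip(round_trip_s: float) -> float:
    """Target distance from a pulse's out-and-back travel time."""
    return C * round_trip_s / 2

# A target at the production unit's claimed 250 m maximum range:
round_trip_s = 2 * 250 / C
print(f"round trip: {round_trip_s * 1e9:.0f} ns")                       # ~1668 ns
print(f"recovered range: {range_from_round_trip(round_trip_s):.1f} m")  # 250.0 m
```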

Most photosensors, like those found in digital cameras and in other lidar systems, use a silicon-based photodetector. Silicon is well-understood, cheap, and the fabrication processes are mature.

Luminar, however, decided to start from the ground up with its system, using an alloy called indium gallium arsenide, or InGaAs. An InGaAs-based photodetector works at a different wavelength of light (1,550 nm rather than ~900 nm) and is far more efficient at capturing it.

The more light you’ve got, the better your sensor — that’s usually the rule. And so it is here; Luminar’s InGaAs sensor and a single laser emitter produced images tangibly superior to those from devices of similar size and power draw, and with fewer moving parts.

The problem is that indium gallium arsenide is like the Dom Perignon of sensor substrates. It’s expensive as hell and designing for it is a highly specialized field. Luminar only got away with it by making a sensor a fraction of the size of a silicon one.

Last year Luminar was working with a company called Black Forest Engineering to design these chips, and finding their paths inextricably linked (unless someone in the office wanted to volunteer to build InGaAs ASICs), Luminar bought it. The 30 employees at Black Forest, combined with the 200 hired since coming out of stealth, bring the company to 350 total.

By bringing the designers in house and building their own custom versions of not just the photodetector but also the various chips needed to parse and pass on the signals, they brought the cost of the receiver down from tens of thousands of dollars to… three dollars.

“We’ve been able to get rid of these expensive processing chips for timing and stuff,” said Russell. “We build our own ASIC. We only take like a speck of InGaAs and put it onto the chip. And we custom fab the chips.”

“This is something people have assumed there was no way you could ever scale it for production fleets,” he continued. “Well, it turns out it doesn’t actually have to be expensive!”

Sure — all it took was a bunch of geniuses, five years, and an eight-figure budget (and I’d be surprised if the $36M in seed funding was all they had to work with). But let’s not quibble.

Quality inspection time in the clean room.

It’s all being done with a view to the long road ahead, though. Last year the company demonstrated that its systems not only worked, but worked well, even if there were only a few dozen of them at first. And they could get away with it, since as Russell put it, “What everyone has been building out so far has been essentially an autonomous test fleet. But now everyone is looking into building an actual, solidified hardware platform that can scale to real world deployment.”

Some companies took a leap of faith, like Toyota and a couple other unnamed companies, even though it might have meant temporary setbacks.

“It’s a very high barrier to entry, but also a very high barrier to exit,” Russell pointed out. “Some of our partners, they’ve had to throw out tens of thousands of miles of data and redo a huge portion of their software stack to move over to our sensor. But they knew they had to do it eventually. It’s like ripping off the band-aid.”

We’ll soon see how the industry progresses — with steady improvement but also intense anxiety and scrutiny following the fatal crash of an Uber autonomous car, it’s difficult to speculate on the near future. But Luminar seems to be looking further down the road.

Gadgets – TechCrunch Devin Coldewey

Logistics may not be the most exciting application of autonomous vehicles, but it’s definitely one of the most important. And the marine shipping industry — one of the oldest industries in the world, as you might imagine — is ready for it. Or at least two major Norwegian shipping companies are: they’re building an autonomous shipping venture called Massterly from the ground up.

“Massterly” isn’t just a pun: “Maritime Autonomous Surface Ship,” or MASS, is the term Wilhelmsen and Kongsberg coined to describe the self-captaining boats that will ply the seas of tomorrow.

These companies, with “a combined 360 years of experience” as their video puts it, are trying to get the jump on the next phase of shipping, starting by creating the world’s first fully electric and autonomous container ship, the Yara Birkeland. It’s a modest vessel by shipping standards — 250 feet long and capable of carrying 120 containers, according to the concept — but it will be able to load, navigate, and unload without a crew.

(One assumes there will be some people on board or nearby to intervene if anything goes wrong, of course. Why else would there be railings up front?)

Each ship carries major radar and lidar units, visible-light and IR cameras, satellite connectivity, and so on.

Control centers will be on land, where the ships will be administered much like air traffic, and ships can be taken over for manual intervention if necessary.

At first there will be limited trials, naturally: the Yara Birkeland will stay within 12 nautical miles of the Norwegian coast, shuttling between Larvik, Brevik, and Herøya. It’ll only be going 6 knots — so don’t expect it to make any overnight deliveries.

“As a world-leading maritime nation, Norway has taken a position at the forefront in developing autonomous ships,” said Wilhelmsen group CEO Thomas Wilhelmsen in a press release. “We take the next step on this journey by establishing infrastructure and services to design and operate vessels, as well as advanced logistics solutions associated with maritime autonomous operations. Massterly will reduce costs at all levels and be applicable to all companies that have a transport need.”

The Yara Birkeland is expected to be seaworthy by 2020, though Massterly should be operating as a company by the end of the year.

Gadgets – TechCrunch Devin Coldewey

Dozens of high-tech phone smugglers have been apprehended by Chinese police, who twigged to the scheme to send refurbished iPhones into the country from Hong Kong via drone — but not the way you might think.

China’s Legal Daily reported the news (and Reuters noted shortly after) following a police press conference; it’s apparently the first cross-border drone-based smuggling case, so likely of considerable interest.

Although the methods used by the smugglers aren’t described, a picture emerges from the details. Critically, in addition to the drones themselves, which look like DJI models with dark coverings, police collected some long wires — more than 600 feet long.

Small packages of 10 or so phones were sent one at a time, and it only took “seconds” to get them over the border. That pretty much rules out flying the drone up and over the border repeatedly — leaving aside that landing a drone in pitch darkness on the other side of a border fence (or across a body of water) would be difficult to do once or twice, let alone dozens of times, the method is also inefficient and risky.

But really, the phones only need to clear the border obstacle. So here’s what you do:

1. Send the drone over once with the cable attached. Confederates on the other side fasten the cable to a fixed point, say 10 or 15 feet off the ground.
2. The drone flies back, paying out the cable, and lands some distance onto the Hong Kong side.
3. The smugglers attach a package of 10 phones to the cable with a carabiner, and the drone flies straight up.
4. When the cable reaches a certain tension, the package slides down it, clearing the fence.
5. The drone descends, and you repeat.

I’ve created a highly professional diagram to illustrate this technique (feel free to reuse):

It’s not 100 percent to scale. The anchor point on the far side might have to be high enough that the cable doesn’t rest on the fence, if there is one, or drag in the water, if that’s the situation. Not sure about that part.

Anyway, it’s quite smart. You get horizontal transport basically for free, and the drone only has to do what it does best: go straight up. Two wires were found, and the police said up to 15,000 phones might be sent across in a night. Assuming 10 phones per trip, and say 20 seconds per flight, that works out to 1,800 phones per hour per drone, which sounds about right. Probably this kind of thing is underway at more than a few places around the world.
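Spelled out, with the payload and cycle time above treated as assumptions rather than reported figures, the math is straightforward:

```python
phones_per_trip = 10    # small package clipped to the cable (assumed above)
seconds_per_trip = 20   # up, release, descend (assumed above)
drones = 2              # two wires were recovered by police

trips_per_hour = 3600 / seconds_per_trip                       # 180
phones_per_hour_per_drone = trips_per_hour * phones_per_trip   # 1,800
hours_for_15k = 15_000 / (phones_per_hour_per_drone * drones)

print(f"{phones_per_hour_per_drone:.0f} phones per hour per drone")
print(f"~{hours_for_15k:.1f} hours to move 15,000 phones")  # ~4.2, easily done in a night
```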

Gadgets – TechCrunch Devin Coldewey

Companies and students who want to test an autonomous vehicle at the University of Michigan have the excellent Mcity simulated urban environment. But if you wanted to test a drone, your options were extremely limited — think “at night in a deserted lecture hall.” Not anymore: the school has just opened its M-Air facility, essentially a giant netted playground for UAVs and their humans.

It may not look like much to the untrained eye, and certainly enclosing a space with a net is considerably less labor-intensive than building an entire fake town. But the benefits are undeniable.

Excited students at a school like U-M must frequently come up with ideas for drone control systems, autonomous delivery mechanisms, new stabilization algorithms and so on. Testing them isn’t nearly as simple, though: finding a safe, controlled space and time to do it, getting the necessary approvals and, of course, containing the fallout if anything goes wrong — tasks like these could easily overwhelm a few undergrads.

M-Air serves as a collective space that’s easy to access but built from the ground up (or rather, the air down) for safe and easy UAV testing. It’s 80 by 120 feet and five stories tall, with a covered area that can hold 25 people. There are lights and power, of course, and because it’s fully enclosed it technically counts as “indoor” testing, which is much easier to get approval for. For outdoor tests you need special authorization to ensure you won’t be messing with nearby flight paths.

“We can test our system as much as we want without fear of it breaking, without fear of hurting other people,” said grad student Matthew Romano in a U-M video. “It really lets us push the boundaries and allows us to really move quickly on iterating and developing the system and testing our algorithms.”

And because it’s outside, students can even test in the lovely Michigan weather.

“With this facility, we can pursue aggressive educational and research flight projects that involve high risk of fly-away or loss-of-control — and in realistic wind, lighting and sensor conditions,” said U-M aerospace engineering professor Ella Atkins.

I feel for the neighbors, though. That buzzing is going to get annoying.

Gadgets – TechCrunch Devin Coldewey

Nvidia is temporarily stopping testing of its autonomous vehicle platform in response to last week’s fatal collision of a self-driving Uber car with a pedestrian. TechCrunch confirmed this with the company, which offered the following statement:

Ultimately [autonomous vehicles] will be far safer than human drivers, so this important work needs to continue. We are temporarily suspending the testing of our self-driving cars on public roads to learn from the Uber incident. Our global fleet of manually driven data collection vehicles continue to operate.

Reuters first reported the news.

The manually driven vehicles, to be clear, are not self-driving ones with safety drivers, but traditionally controlled vehicles with a full autonomous sensor suite on them to collect data.

Toyota also suspended its autonomous vehicle testing out of concern for its own drivers’ well-being. Uber of course ceased its testing operations at once.

Gadgets – TechCrunch Devin Coldewey

A self-driving vehicle made by Uber has struck and killed a pedestrian. It’s the first such incident and will certainly be scrutinized like no other autonomous vehicle interaction in the past. But on the face of it, it’s hard to understand how, short of a total system failure, this could happen when the entire car has essentially been designed around preventing exactly this situation from occurring.

Something unexpectedly entering the vehicle’s path is pretty much the first emergency event that autonomous car engineers look at. The situation could be many things — a stopped car, a deer, a pedestrian — and the systems are one and all designed to detect them as early as possible, identify them, and take appropriate action. That could be slowing, stopping, swerving, anything.

Uber’s vehicles are equipped with several different imaging systems that perform both ordinary duty (monitoring nearby cars, signs and lane markings) and extraordinary duty like that just described. No fewer than four of them should have picked up the victim in this case.

Top-mounted lidar. The bucket-shaped item on top of these cars is a lidar, or light detection and ranging, system that produces a 3D image of the car’s surroundings multiple times per second. Using infrared laser pulses that bounce off objects and return to the sensor, lidar can detect static and moving objects in considerable detail, day or night.

This is an example of lidar-created imagery, though not specifically what the Uber vehicle would have seen.

Heavy snow and fog can obscure a lidar’s lasers, and its accuracy decreases with range, but for anything from a few feet to a few hundred feet, it’s an invaluable imaging tool and one that is found on practically every self-driving car.

The lidar unit, if operating correctly, should have been able to make out the person in question, assuming they were not totally obscured, while they were still more than a hundred feet away, and to pass their presence on to the “brain” that collates the imagery.

Front-mounted radar. Radar, like lidar, sends out a signal and waits for it to bounce back, but it uses radio waves instead of light. This makes it more resistant to interference, since radio can pass through snow and fog, but also lowers its resolution and changes its range profile.

Tesla’s Autopilot relies mostly on radar.

Depending on the radar unit Uber employed — likely multiple in both front and back to provide 360 degrees of coverage — the range could differ considerably. If it’s meant to complement the lidar, chances are it overlaps considerably, but is built more to identify other cars and larger obstacles.

The radar signature of a person is not nearly so recognizable, but it’s very likely they would have at least shown up, confirming what the lidar detected.

Short- and long-range optical cameras. Lidar and radar are great for locating shapes, but they’re no good for reading signs, figuring out what color something is, and so on. That’s a job for visible-light cameras with sophisticated computer vision algorithms running in real time on their imagery.

The cameras on the Uber vehicle watch for telltale patterns that indicate braking vehicles (sudden red lights), traffic lights, crossing pedestrians, and so on. Especially on the front end of the car, multiple angles and types of camera would be used, so as to get a complete picture of the scene into which the car is driving.

Detecting people is one of the most commonly attempted computer vision problems, and the algorithms that do it have gotten quite good. “Segmenting” an image, as it’s often called, generally also involves identifying things like signs, trees, sidewalks and more.
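To give a sense of how accessible the basic building block is, a stock OpenCV install ships a classic HOG-based pedestrian detector. It is far cruder than anything a production self-driving stack would run, and the file name below is just a placeholder, but it shows the shape of the problem:

```python
import cv2

# Classic HOG + linear SVM people detector bundled with OpenCV.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("camera_frame.jpg")  # placeholder: one frame from a front camera
boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8), padding=(8, 8), scale=1.05)

# Draw a rectangle around each detected person and save the annotated frame.
for (x, y, w, h) in boxes:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detections.jpg", frame)
```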

That said, it can be hard at night. But that’s an obvious problem, the answer to which is the previous two systems, which work night and day. Even in pitch darkness, a person wearing all black would show up on lidar and radar, warning the car that it should perhaps slow and be ready to see that person in the headlights. That’s probably why a night-vision system isn’t commonly found in self-driving vehicles (I can’t be sure there isn’t one on the Uber car, but it seems unlikely).

Safety driver. It may sound cynical to refer to a person as a system, but the safety drivers in these cars are very much acting in the capacity of an all-purpose failsafe. People are very good at detecting things, even though we don’t have lasers coming out of our eyes. And our reaction times aren’t the best, but if it’s clear that the car isn’t going to respond, or has responded wrongly, a trained safety driver will react correctly.

Worth mentioning is that there is also a central computing unit that takes the input from these sources and creates its own more complete representation of the world around the car. A person may disappear behind a car in front of the system’s sensors, for instance, and no longer be visible for a second or two, but that doesn’t mean they ceased existing. This goes beyond simple object recognition and begins to bring in broader concepts of intelligence such as object permanence, predicting actions, and the like.
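A toy illustration of that object-permanence idea, nothing like a production tracker, is simply to coast a tracked object forward on its last known velocity for a short while when detections stop arriving, instead of forgetting it immediately:

```python
from dataclasses import dataclass

COAST_FRAMES = 30  # keep "remembering" an unseen object for ~1 s at 30 fps (assumed)

@dataclass
class Track:
    x: float         # last known position along one axis, in meters
    vx: float        # estimated velocity, meters per frame
    missed: int = 0  # consecutive frames with no matching detection

tracks: dict[str, Track] = {}

def update(detections: dict[str, float]) -> None:
    """detections maps an object id to its measured position this frame."""
    for obj_id, x in detections.items():
        if obj_id in tracks:
            t = tracks[obj_id]
            t.vx = x - t.x  # crude one-frame velocity estimate
            t.x, t.missed = x, 0
        else:
            tracks[obj_id] = Track(x=x, vx=0.0)
    # Objects not seen this frame are coasted forward rather than dropped:
    # a pedestrian passing behind a parked car hasn't ceased to exist.
    for obj_id, t in list(tracks.items()):
        if obj_id not in detections:
            t.x += t.vx
            t.missed += 1
            if t.missed > COAST_FRAMES:
                del tracks[obj_id]
```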

That central unit is also arguably the most advanced and most closely guarded part of any self-driving car system, and so is kept well under wraps.

It isn’t clear what the circumstances were under which this tragedy played out, but the car was certainly equipped with technology that was intended to, and should have, detected the person and caused the car to react appropriately. Furthermore, if one system didn’t work, another should have sufficed — multiple fallbacks are only prudent in high-stakes matters like driving on public roads.

We’ll know more as Uber, local law enforcement, federal authorities, and others investigate the accident.

Gadgets – TechCrunch Megan Rose Dickey

When you get a new car, and you’re feeling like a star, the first thing you’re probably going to do is ghost ride it. This is where the Owl camera can come in.

For the last couple of weeks, I’ve been testing Owl, an always-on, two-way camera that records everything happening inside and outside of your car, all day, every day.

The Owl camera is designed to monitor your car for break-ins, collisions and police stops. Owl can also be used to capture fun moments (see above) on the road or beautiful scenery, simply by saying, ‘Ok, presto.’

If Owl senses a car accident, it automatically saves the video to your phone, including the 10 seconds before and after the accident. And if someone is attracted to your car because of the camera and its blinking green light, and proceeds to steal the camera, Owl will give you another one.
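The “10 seconds before” part is a standard rolling-buffer trick: recent footage is kept in memory at all times and only committed to a saved clip when a trigger fires. A minimal sketch of the idea (the frame rate and trigger here are assumptions, not Owl’s actual implementation):

```python
from collections import deque

FPS = 30           # assumed frame rate
PRE_SECONDS = 10   # footage to keep from before the event
POST_SECONDS = 10  # footage to record after the event

class EventClipBuffer:
    """Toy pre/post-event recorder for a dashcam-style device."""

    def __init__(self):
        self.pre = deque(maxlen=FPS * PRE_SECONDS)  # rolling window of recent frames
        self.post_remaining = 0
        self.clip = None

    def add_frame(self, frame):
        if self.post_remaining > 0:   # currently capturing the "after" portion
            self.clip.append(frame)
            self.post_remaining -= 1
        else:
            self.pre.append(frame)    # otherwise just keep the rolling window fresh

    def trigger(self):
        """Call when a collision is detected; snapshots the last 10 seconds."""
        self.clip = list(self.pre)
        self.post_remaining = FPS * POST_SECONDS
```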

For 24 hours, you can view your driving and any other incidents that happened during the day. You can also, of course, save footage to your phone if you want to watch it after those 24 hours are up.

Setting it up

The two-way camera plugs into your car’s on-board diagnostics (OBD) port (every car built after 1996 has one), and takes just a few minutes to set up. The camera tucks right in between the dashboard and windshield. Once it’s hooked up, you can access your car’s camera anytime via the Owl mobile app.

I was a bit skeptical about the ease with which I’d be able to install the camera, but it was actually pretty easy. From opening the box to getting the camera up and running, it took fewer than ten minutes.

Accessing the footage

This is where it can get a little tricky. If you want to save footage after the fact, Owl requires that you be physically near the camera. That meant I had to put on real clothes and walk outside to my car to connect to the Owl’s Wi-Fi in order to access the footage from the past 24 hours. Eventually, however, Owl says it will be possible to access that footage over LTE.

But that wasn’t my only qualm with footage access. Once I tried to download the footage, the app would often crash or only download a portion of the footage I requested. This, however, should be easily fixable, given Owl is set up for over-the-air updates. In fact, Owl told me the company is aware of that issue and is releasing a fix this week. If I want to see the live footage, though, that’s easy to access.

Notifications

Owl is set up to let you know if and when something happens to your car while you’re not there. My Owl’s out-of-the-box settings were set to high sensitivity, which meant I received notifications if a car simply drove by. Changing the settings to a lower sensitivity fixed the annoyance of too many notifications.

Since installing the Owl camera, there hasn’t been a situation in which I was notified of any nefarious behavior happening in or around my car. But I do rest assured knowing that if something does happen, I’ll be notified right away and will be able to see live footage of whatever it is that’s happening.

My understanding is that most of the dash cams on the market aren’t set up to give you 24/7 video access, nor are they designed to be updatable over the air. The best-selling dash cam on Amazon, for example, is a front-facing-only camera with collision detection, but it’s not always on. That one retails for about $100, while Amazon’s Choice is one that costs just $47.99 and comes with Wi-Fi to enable real-time viewing and video playback.

Owl is much more expensive than its competition, retailing at $299, with LTE service offered at $10 per month. Currently, Owl is only available as a bundle for $349, which includes one year of the LTE service.

Unlike Owl’s competition, however, the device is always on, due to the fact it plugs into your car’s OBD port. That’s the main, most attractive differentiator for me. To be clear, while the Owl does draw energy from your car’s battery, it’s smart enough to know when it needs to shut down. Last weekend, I didn’t drive my car for over 24 hours, so Owl shut itself down to ensure my battery wouldn’t be dead when I came back.

Owl, which launched last month, has $18 million in funding from Defy Ventures, Khosla Ventures, Menlo Ventures, Sherpa Capital and others. The company was founded by Andy Hodge, a former product lead at Apple and executive at Dropcam, and Nathan Ackerman, who formerly led development for Microsoft’s HoloLens.

P.S. I was listening to “Finesse” by Bruno Mars and Cardi B in the GIF above.