Gadgets – TechCrunch Devin Coldewey

Lasers! Everybody loves them, everybody wants them. But outside a few niche applications they have failed to live up to the destructive potential that Saturday morning cartoons taught us all to expect. In defiance of this failure, a company in China claims to have produced a “laser AK-47” that can burn targets in a fraction of a second from half a mile away. But skepticism is still warranted.

The weapon, dubbed the ZKZM-500, is described by the South China Morning Post as being about the size and weight of an ordinary assault rifle, but capable of firing hundreds of shots, each of which can cause “instant carbonization” of human skin.

“The pain will be beyond endurance,” added one of the researchers.

Now, there are a few red flags here. First is the simple fact that the weapon is only described and not demonstrated. Second is that what is described sounds incompatible with physics.

Laser weaponry capable of real harm has eluded the eager boffins of the world’s militaries for several reasons, none of which sound like they’ve been addressed in this research, which is long on bombast but short, at least in the SCMP article, on substance.

First there is the problem of power. Lasers of relatively low power can damage eyes easily because our eyes are among the most sensitive optical instruments ever developed on Earth. But such a laser may prove incapable of even popping a balloon. That’s because the destruction in the eye is due to an overload of light on a light-sensitive medium, while destruction of a physical body (be it a human body or, say, a missile) is due to heat.

Existing large-scale laser weapons systems powered by parallel arrays of batteries struggle to create meaningful heat damage unless trained on targets for a matter of seconds. And the power required to set a person aflame instantly from half a mile away is truly huge. Let’s just do a little napkin math here.

The article says that the gun is powered by rechargeable lithium-ion batteries, the same in principle as those in your phone (though no doubt bigger). And it is said to be capable of a thousand two-second shots, amounting to two thousand seconds, or about half an hour total. A single laser “shot” of the magnitude tested by airborne and vehicle systems is on the order of tens of kilowatts, and those have trouble causing serious damage, which is why they’ve been all but abandoned by those developing them.

Let’s pretend for a second that those systems work at those power levels. They use chemical power sources, since the energy has to be delivered far faster than lithium-ion batteries will safely discharge. But let’s say we could use lithium-ion batteries anyway. The Tesla Powerwall is a useful comparator: it provides a few kilowatts of power and stores a few kilowatt-hours. And it weighs more than 200 pounds.
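If you want to see that napkin math written out, here is a quick sketch in Python. The 30-kilowatt beam and the Powerwall figures are my own rough assumptions, not specs from the SCMP report:

```python
# Napkin math: energy a "laser AK-47" would need versus what a portable
# lithium-ion pack can plausibly hold. All figures are rough assumptions.

BEAM_POWER_W = 30_000      # assume ~30 kW, i.e. the "tens of kilowatts" class
SHOT_SECONDS = 2           # per the description of each shot
SHOTS = 1_000              # the claimed number of shots per charge

energy_needed_kwh = BEAM_POWER_W * SHOT_SECONDS * SHOTS / 3.6e6  # 1 kWh = 3.6 MJ

# Tesla Powerwall 2, for scale: roughly 13.5 kWh of storage, ~5 kW continuous
# output, and well over 200 pounds.
POWERWALL_KWH = 13.5
POWERWALL_KW = 5.0

print(f"Energy required: about {energy_needed_kwh:.0f} kWh "
      f"(~{energy_needed_kwh / POWERWALL_KWH:.1f} Powerwalls), before any losses")
print(f"Peak draw per shot: {BEAM_POWER_W / 1000:.0f} kW, "
      f"versus ~{POWERWALL_KW:.0f} kW continuous from one Powerwall")
```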

There’s just no way that a laser powered by a lithium-ion battery that a person could carry would be capable of producing the kind of heat described at point blank range, let alone at 800 meters.

That’s because of attenuation. Lasers, unlike bullets, spread and scatter as they travel, getting weaker and weaker. Attenuation is non-trivial at anything beyond, say, a few dozen meters. By the time you get out to 800 meters, the air and water vapor the beam has passed through are enough to reduce it to a fraction of its original power.
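For the curious, here is a rough sketch of how beam spreading alone dilutes a laser over distance. The aperture and divergence numbers are assumptions for illustration, not anything claimed for the ZKZM-500:

```python
import math

# How beam spreading alone dilutes intensity with range. The aperture and
# divergence values are assumptions for illustration only.
initial_radius_m = 0.01    # 1 cm beam radius at the muzzle (assumed)
divergence_rad = 0.001     # 1 milliradian divergence (assumed)
distance_m = 800           # the claimed effective range

radius_at_range = initial_radius_m + distance_m * math.tan(divergence_rad)
dilution = (radius_at_range / initial_radius_m) ** 2

print(f"Spot radius at {distance_m} m: about {radius_at_range * 100:.0f} cm")
print(f"Intensity spread over ~{dilution:.0f}x the area, before any absorption "
      "or scattering by the air along the path")
```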

Of course there are lasers that can fire from Earth to space and vice versa — but they’re not trying to fry protestors; all that matters is that a few photons arrive at the destination and are intelligible as a signal.

I’m not saying there will never be laser weapons. But I do feel confident in saying that this prototype, ostensibly ready for mass production and deployment among China’s anti-terrorist forces, is bunk. As much as I enjoy the idea of laser rifles, the idea of one that weighs a handful of pounds and fires hundreds of instantly skin-searing shots is just plain infeasible today.

The laser project is supposedly taking place at the Xi’an Institute of Optics and Precision Mechanics, part of the Chinese Academy of Sciences. Hopefully they give a real-world demonstration of the device soon and put me to shame.

Gadgets – TechCrunch Devin Coldewey

For many of us, clean, drinkable water comes right out the tap. But for billions it’s not that simple, and all over the world researchers are looking into ways to fix that. Today brings work from Berkeley, where a team is working on a water-harvesting apparatus that requires no power and can produce water even in the dry air of the desert. Hey, if a cactus can do it, why can’t we?

There are numerous methods for collecting water from the air, but many require power or parts that need to be replaced. What professor Omar Yaghi has developed needs neither.

The secret isn’t some clever solar concentrator or low-friction fan — it’s all about the materials. Yaghi is a chemist, and has created what’s called a metal-organic framework, or MOF, that’s eager both to absorb and release water.

It’s essentially a powder made of tiny crystals in which water molecules get caught as the temperature decreases. Then, when the temperature rises, the water is released back into the air.

Yaghi demonstrated the process on a small scale last year, but now he and his team have published the results of a larger field test producing real-world amounts of water.

They put together a box about two feet per side with a layer of MOF on top that sits exposed to the air. Every night the temperature drops and the humidity rises, and water is trapped inside the MOF; in the morning, the sun’s heat drives the water from the powder, and it condenses on the box’s sides, kept cool by a sort of hat. The result of a night’s work: 3 ounces of water per pound of MOF used.

That’s not much more than a few sips, but improvements are already on the way. The current MOF uses zirconium, but an aluminum-based MOF, already being tested in the lab, will cost 99 percent less and produce twice as much water.

With the new powder and a handful of boxes, a person’s drinking needs are met without using any power or consumable material. Add a mechanism that harvests and stores the water and you’ve got yourself an off-grid potable water solution going.
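A little napkin math on what that yield implies. The two-liter daily drinking figure is my assumption, and the doubling factor is simply the team’s claim for the aluminum MOF taken at face value:

```python
# Back-of-envelope: how much MOF it would take to cover one person's drinking
# water, using the field-test yield above. The daily-need figure is an
# assumption; the "twice as much" factor is the claim for the aluminum MOF.

OUNCE_LITERS = 0.0296
yield_l_per_lb = 3 * OUNCE_LITERS      # ~3 oz of water per pound of MOF per night
daily_need_l = 2.0                     # assumed drinking water per person per day

zirconium_lbs = daily_need_l / yield_l_per_lb
aluminum_lbs = zirconium_lbs / 2       # the aluminum MOF reportedly yields double

print(f"Zirconium MOF needed: about {zirconium_lbs:.0f} lbs per person per day")
print(f"Aluminum MOF needed:  about {aluminum_lbs:.0f} lbs per person per day")
```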

“There is nothing like this,” Yaghi explained in a Berkeley news release. “It operates at ambient temperature with ambient sunlight, and with no additional energy input you can collect water in the desert. The aluminum MOF is making this practical for water production, because it is cheap.”

He says that there are already commercial products in development. More tests, with mechanical improvements and including the new MOF, are planned for the hottest months of the summer.

Gadgets – TechCrunch Devin Coldewey

A robot’s got to know its limitations. But that doesn’t mean it has to accept them. This one in particular uses tools to expand its capabilities, commandeering nearby items to construct ramps and bridges. It’s satisfying to watch but, of course, also a little worrying.

This research, from Cornell and the University of Pennsylvania, is essentially about making a robot take stock of its surroundings and recognize something it can use to accomplish a task that it knows it can’t do on its own. It’s actually more like a team of robots, since the parts can detach from one another and accomplish things on their own. But you didn’t come here to debate the multiplicity or unity of modular robotic systems! That’s for the folks at the IEEE International Conference on Robotics and Automation, where this paper was presented (and Spectrum got the first look).

SMORES-EP is the robot in play here, and the researchers have given it a specific breadth of knowledge. It knows how to navigate its environment, but also how to inspect it with its little mast-cam and from that inspection derive meaningful data like whether an object can be rolled over, or a gap can be crossed.

It also knows how to interact with certain objects, and what they do; for instance, it can use its built-in magnets to pull open a drawer, and it knows that a ramp can be used to roll up to an object of a given height or lower.

A high-level planning system directs the robots/robot-parts based on knowledge that isn’t critical for any single part to know. For example, given the instruction to find out what’s in a drawer, the planner understands that to accomplish that, the drawer needs to be open; for it to be open, a magnet-bot will have to attach to it from this or that angle, and so on. And if something else is necessary, for example a ramp, it will direct that to be placed as well.
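To give a flavor of that kind of precondition-driven planning, here is a toy sketch with hypothetical actions and preconditions (not the team’s actual planner) that expands “find out what’s in the drawer” into the ramp, magnet, and drive steps:

```python
# Toy goal decomposition in the spirit of the high-level planner described
# above. Hypothetical actions and preconditions; not the actual SMORES-EP code.

ACTIONS = {
    "inspect_drawer_contents": {"pre": ["drawer_open", "camera_in_position"], "adds": ["contents_known"]},
    "open_drawer":             {"pre": ["magnet_attached"],                   "adds": ["drawer_open"]},
    "attach_magnet":           {"pre": ["at_drawer"],                         "adds": ["magnet_attached"]},
    "drive_to_drawer":         {"pre": ["ramp_placed"],                       "adds": ["at_drawer", "camera_in_position"]},
    "place_ramp":              {"pre": [],                                    "adds": ["ramp_placed"]},
}

def plan(goal, state, steps=None):
    """Backward-chain from a goal fact to a sequence of actions that achieves it."""
    steps = steps if steps is not None else []
    if goal in state:
        return steps
    for name, act in ACTIONS.items():
        if goal in act["adds"]:
            for pre in act["pre"]:
                steps = plan(pre, state, steps)
                state.add(pre)
            steps.append(name)
            state.update(act["adds"])
            return steps
    raise ValueError(f"No action achieves {goal}")

print(plan("contents_known", set()))
# ['place_ramp', 'drive_to_drawer', 'attach_magnet', 'open_drawer', 'inspect_drawer_contents']
```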

The experiment shown in this video has the robot system demonstrating how this could work in a situation where the robot must accomplish a high-level task using this limited but surprisingly complex body of knowledge.

In the video, the robot is told to check the drawers for certain objects. In the first drawer, the target objects aren’t present, so it must inspect the next one up. But it’s too high — so it needs to get on top of the first drawer, which luckily for the robot is full of books and constitutes a ledge. The planner sees that a ramp block is nearby and orders it to be put in place, and then part of the robot detaches to climb up and open the drawer, while the other part maneuvers into place to check the contents. Target found!

In the next task, it must cross a gap between two desks. Fortunately, someone left the parts of a bridge just lying around. The robot puts the bridge together, places it in position after checking the scene, and sends its forward half rolling towards the goal.

These cases may seem rather staged, but this isn’t about the robot itself and its ability to tell what would make a good bridge. That comes later. The idea is to create systems that logically approach real-world situations based on real-world data and solve them using real-world objects. Being able to construct a bridge from scratch is nice, but unless you know what a bridge is for, when and how it should be applied, where it should be carried and how to get over it, and so on, it’s just a part in search of a whole.

Likewise, many a robot with a perfectly good drawer-pulling hand will have no idea that you need to open a drawer before you can tell what’s in it, or that maybe you should check other drawers if the first doesn’t have what you’re looking for!

Such basic problem-solving is something we take for granted, but nothing can be taken for granted when it comes to robot brains. Even in the experiment described above, the robot failed multiple times for multiple reasons while attempting to accomplish its goals. That’s okay — we all have a little room to improve.

Gadgets – TechCrunch Devin Coldewey

Microsoft’s HoloLens has an impressive ability to quickly sense its surroundings, but limiting it to displaying emails or game characters on them would show a lack of creativity. New research shows that it works quite well as a visual prosthesis for the vision impaired, not relaying actual visual data but guiding them in real time with audio cues and instructions.

The researchers, from Caltech and the University of Southern California, first argue that restoring vision is at present simply not a realistic goal, but that replacing the perception portion of vision isn’t necessary to replicate the practical portion. After all, if you can tell where a chair is, you don’t need to see it to avoid it, right?

Crunching visual data and producing a map of high-level features like walls, obstacles, and doors is one of the core capabilities of the HoloLens, so the team decided to let it do its thing and recreate the environment for the user from these extracted features.

They designed the system around sound, naturally. Every major object and feature can tell the user where it is, either via voice or sound. Walls, for instance, hiss (presumably a white noise, not a snake hiss) as the user approaches them. The user can also scan the scene, with objects announcing themselves in order from left to right, each from the direction in which it is located. A single object can be selected and will repeat its callout to help the user find it.
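To give a sense of how that scan mode might work under the hood, here is a minimal sketch that orders hypothetical detected objects left to right and flags a nearby wall; the paper’s actual cue design is more sophisticated:

```python
# Sketch: turn a spatial map of detected objects into an ordered set of
# audio callouts, left to right, the way the scan mode is described above.
# Object data is hypothetical; a real system would pull it from the headset's
# spatial mesh and play spatialized audio rather than printing.

import math

detected = [
    {"label": "chair", "x": -1.2, "z": 2.0},   # x: meters left(-)/right(+), z: meters ahead
    {"label": "wall",  "x":  0.0, "z": 0.8},
    {"label": "door",  "x":  2.5, "z": 3.5},
]

def bearing_deg(obj):
    """Angle of the object relative to straight ahead; negative means left."""
    return math.degrees(math.atan2(obj["x"], obj["z"]))

def scan(objects):
    # Announce objects in left-to-right order, each from its own direction.
    for obj in sorted(objects, key=bearing_deg):
        dist = math.hypot(obj["x"], obj["z"])
        print(f"{obj['label']} at {bearing_deg(obj):+.0f} degrees, {dist:.1f} m")

def proximity_warning(objects, threshold_m=1.0):
    # Walls "hiss" as the user gets close; here, just flag anything nearby.
    for obj in objects:
        if obj["label"] == "wall" and math.hypot(obj["x"], obj["z"]) < threshold_m:
            print("wall: proximity warning (hiss)")

scan(detected)
proximity_warning(detected)
```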

That’s all well and good for stationary tasks like finding your cane or the couch in a friend’s house. But the system also works in motion.

The team recruited seven blind people to test it out. They were given a brief intro but no training, and then asked to accomplish a variety of tasks. The users could reliably locate and point to objects from audio cues, and were able to find a chair in a room in a fraction of the time they normally would, and avoid obstacles easily as well.

This render shows the actual paths taken by the users in the navigation tests.

Then they were tasked with navigating from the entrance of a building to a room on the second floor by following the headset’s instructions. A “virtual guide” repeatedly said “follow me” from an apparent distance of a few feet ahead, while also warning when stairs were coming, where handrails were, and when the user had gone off course.

All seven users got to their destinations on the first try, and much more quickly than if they had had to proceed normally with no navigation. One subject, the paper notes, said “That was fun! When can I get one?”

Microsoft actually looked into something like this years ago, but the hardware just wasn’t there — HoloLens changes that. Even though it is clearly intended for use by sighted people, its capabilities naturally fill the requirements for a visual prosthesis like the one described here.

Interestingly, the researchers point out that this type of system was predicted more than 30 years ago, long before it was even close to possible:

“I strongly believe that we should take a more sophisticated approach, utilizing the power of artificial intelligence for processing large amounts of detailed visual information in order to substitute for the missing functions of the eye and much of the visual pre-processing performed by the brain,” wrote the clearly far-sighted C.C. Collins way back in 1985.

The potential for a system like this is huge, but this is just a prototype. As systems like HoloLens get lighter and more powerful, they’ll go from lab-bound oddities to everyday items — one can imagine the front desk at a hotel or mall stocking a few to give to vision-impaired folks who need to find their room or a certain store.

“By this point we expect that the reader already has proposals in mind for enhancing the cognitive prosthesis,” they write. “A hardware/software platform is now available to rapidly implement those ideas and test them with human subjects. We hope that this will inspire developments to enhance perception for both blind and sighted people, using augmented auditory reality to communicate things that we cannot see.”

Gadgets – TechCrunch Devin Coldewey

It’s not enough in this day and age that we have to deal with fake news, we also have to deal with fake prescription drugs, fake luxury goods, and fake Renaissance-era paintings. Sometimes all at once! IBM’s Verifier is a gadget and platform made (naturally) to instantly verify that something is what it claims to be, by inspecting it at a microscopic level.

Essentially you stick a little thing on your phone’s camera, open the app, and put the sensor against what you’re trying to verify, be it a generic antidepressant or an ore sample. By combining microscopy, spectroscopy, and a little bit of AI, the Verifier compares what it sees to a known version of the item and tells you whether they’re the same.

The key component in this process is an “optical element” that sits in front of the camera (it can be anything that takes a decent image) amounting to a specialized hyper-macro lens. It allows the camera to detect features as small as a micron — for comparison, a human hair is usually a few dozen microns wide.

At the micron level there are patterns and optical characteristics that aren’t visible to the human eye, like precisely which wavelengths of light an object reflects. The quality of a weave, the number of flaws in a gem, the mixture of metals in an alloy… all stuff you or I would miss, but a machine learning system trained on such examples will pick out instantly.

For instance, a counterfeit pill that looks orange and smooth and imprinted just like a real one to the naked eye will likely appear totally different at the micro level: textures and structures with a very distinct pattern, or at least distinct from the genuine article — not to mention a spectral signature that’s probably way different. There’s also no reason it can’t be used on things like expensive wines or oils, contaminated water, currency, and plenty of other items.

IBM was eager to highlight the AI element, which is trained on the various patterns and differentiates between them, though as far as I can tell it’s a pretty straightforward classification task. I’m more impressed by the lens they put together, which can resolve features at the micron level with very little distortion and without excluding or skewing the colors too much. It even works on multiple phones — you don’t have to have this or that model.
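Conceptually, that classification step looks something like the sketch below, with made-up feature vectors standing in for micro-texture and spectral measurements, since IBM hasn’t published the actual pipeline:

```python
# Sketch of the verification idea: extract a feature vector from a micro-scale
# image (texture statistics, spectral response, etc.) and compare it against
# references for the genuine item. These feature values are made-up stand-ins;
# the real Verifier's features and model are not public.

import math

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Hypothetical feature vectors: [texture_roughness, pore_density, reflectance_ratio]
GENUINE_REFERENCES = [
    [0.42, 0.88, 0.31],
    [0.40, 0.85, 0.33],
    [0.43, 0.90, 0.30],
]

def verify(sample_features, references=GENUINE_REFERENCES, threshold=0.1):
    """Call it genuine if the sample sits close enough to known-genuine examples."""
    best = min(distance(sample_features, ref) for ref in references)
    return ("genuine" if best <= threshold else "suspect", best)

print(verify([0.41, 0.87, 0.32]))   # close to the references -> genuine
print(verify([0.70, 0.40, 0.55]))   # micro-texture way off -> suspect
```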

The first application IBM is announcing for its Verifier is in the diamond trade, which is of course known for fetishizing the stones and their uniqueness, and for establishing elaborate supply chains to ensure product is carefully controlled. The Verifier will be used as an aid for grading stones, not on its own but as a tool for human checkers; it’s a partnership with the Gemological Institute of America, which will test integrating the tool into its own workflow.

By imaging the stone from several angles, the individual identity of the diamond can be recorded and tracked as well, so that its provenance and trail through the industry can be tracked over the years. Here IBM imagines blockchain will be useful, which is possible but not exactly a given.

It’ll be a while before you can have one of your own, but here’s hoping this type of tech becomes popular enough that you can check the quality or makeup of something at least without having to visit some lab.

Gadgets – TechCrunch Devin Coldewey

Making something fly involves a lot of trade-offs. Bigger stuff can hold more fuel or batteries, but too big and the lift required is too much. Small stuff takes less lift to fly but might not hold a battery with enough energy to do so. Insect-sized drones have had that problem in the past — but now this RoboFly is taking its first flaps into the air… all thanks to the power of lasers.

We’ve seen bug-sized flying bots before, like the RoboBee, but as you can see it has wires attached to it that provide power. Batteries on board would weigh it down too much, so researchers have focused in the past on demonstrating that flight is possible in the first place at that scale.

But what if you could provide power externally without wires? That’s the idea behind the University of Washington’s RoboFly, a sort of spiritual successor to the RoboBee that gets its power from a laser trained on an attached photovoltaic cell.

“It was the most efficient way to quickly transmit a lot of power to RoboFly without adding much weight,” said co-author of the paper describing the bot, Shyam Gollakota. He’s obviously very concerned with power efficiency — last month he and his colleagues published a way of transmitting video with 99 percent less power than usual.

There’s more than enough power in the laser to drive the robot’s wings; it gets adjusted to the correct voltage by an integrated circuit, and a microcontroller sends that power to the wings depending on what they need to do. Here it goes:

“To make the wings flap forward swiftly, it sends a series of pulses in rapid succession and then slows the pulsing down as you get near the top of the wave. And then it does this in reverse to make the wings flap smoothly in the other direction,” explained lead author Johannes James.
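Here is a rough sketch of that drive scheme; the pulse counts and timings are made up for illustration, since the actual controller values aren’t given:

```python
# Sketch: a pulse schedule in the spirit of the flapping scheme described
# above: rapid pulses at the start of a stroke that slow near the top, then
# the same pattern driving the wing back the other way. Timings are made up.

def stroke_pulse_intervals(n_pulses=10, fast_ms=0.2, slow_ms=0.8):
    """Pulse spacing that ramps from fast_ms to slow_ms over one stroke."""
    step = (slow_ms - fast_ms) / (n_pulses - 1)
    return [fast_ms + i * step for i in range(n_pulses)]

def flap_cycle():
    forward = [(+1, dt) for dt in stroke_pulse_intervals()]   # forward stroke
    backward = [(-1, dt) for dt in stroke_pulse_intervals()]  # return stroke
    return forward + backward

cycle = flap_cycle()
total_ms = sum(dt for _, dt in cycle)
print(f"One full flap: {len(cycle)} pulses over {total_ms:.1f} ms "
      f"(~{1000 / total_ms:.0f} flaps per second)")
```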

At present the bot just takes off, travels almost no distance and lands — but that’s enough to prove the concept of a wirelessly powered robot insect, which is no small feat. The next steps are to improve onboard telemetry so it can control itself, and to build a steered laser that can follow the little bug’s movements and continuously beam power in its direction.

The team is headed to Australia next week to present the RoboFly at the International Conference on Robotics and Automation in Brisbane.

Gadgets – TechCrunch Devin Coldewey

The InSight launch earlier this month had a couple of stowaways: a pair of tiny CubeSats that are already the farthest such tiny satellites have ever been from Earth — by a long shot. And one of them got a chance to snap a picture of their home planet as an homage to the Voyager mission’s famous “Pale Blue Dot.” It’s hardly as amazing a shot as the original, but it’s still cool.

The CubeSats, named MarCO-A and B, are an experiment to test the suitability of pint-size craft for exploration of the solar system; previously they have only ever been deployed into orbit.

That changed on May 5, when the InSight mission took off, with the MarCO twins detaching on a similar trajectory to the geology-focused Mars lander. It wasn’t long before they went farther than any CubeSat has gone before.

A few days after launch, MarCO-A and B were about a million kilometers (roughly 620,000 miles) from Earth, and it was time for each to unfold its high-gain antenna. A fisheye camera attached to the chassis had an eye on the process and took a picture to send back home, informing mission control that all was well.

But as a bonus (though not by accident — very few accidents happen on missions like this), Earth and the moon were in full view as MarCO-B took its antenna selfie. Here’s an annotated version of the one above:

“Consider it our homage to Voyager,” said JPL’s Andy Klesh in a news release. “CubeSats have never gone this far into space before, so it’s a big milestone. Both our CubeSats are healthy and functioning properly. We’re looking forward to seeing them travel even farther.”

So far it’s only good news and validation of the idea that cheap CubeSats could potentially be launched by the dozen to undertake minor science missions at a fraction of the cost of something like InSight.

Don’t expect any more snapshots from these guys, though. A JPL representative told me the cameras were really only included to make sure the antenna deployed properly. Really any pictures of Mars or other planets probably wouldn’t be worth looking at twice — these are utility cameras with fisheye lenses, not the special instruments that orbiters use to get those great planetary shots.

The MarCOs will pass by Mars at the same time that InSight is making its landing, and depending on how things go, they may even be able to pass on a little useful info to mission control while it happens. Tune in on November 26 for that!

D3bris Online Magazine Deb

Have A Look At What The Future Has in Store From The Google I/O 2018 keynote (in 14 minutes)

The Google I/O 2018 keynote had a bunch of major announcements about Android P, Google Assistant, and more. Here’s the most important news to know…


Smart Compose in Gmail

This is a nifty new feature in Gmail that uses machine learning to not just predict words users plan to type, but entire phrases. And we’re not just talking about simple predictions like addresses, but entire phrases that are suggested based on context and user history. The feature will roll out to users in the next month.

Google Photos AI features

Google Photos is getting a ton of new features based on artificial intelligence and machine learning. For example, Google Photos can take an old black-and-white photo and not just convert it to color, but convert it to realistic color and touch it up in the process.

Google Assistant voices

The original Google Assistant voice was named Holly, and it was based on actual recordings. Moving forward, Google Assistant will get six new voices… including John Legend! Google is using WaveNet to make voices more realistic, and it hopes to ultimately perfect all accents and languages around the world. Google Assistant will support 30 different languages by the end of 2018.

Natural conversation

Google is making a ton of upgrades to Google Assistant revolving around natural conversation. For one, conversations can continue following an initial wake command (“Hey Google”). The new feature is called continued conversation and it’ll be available in the coming weeks.

Multiple Actions support is coming to Google Assistant as well, allowing Google Assistant to handle multiple commands at one time.

Another new feature called “Pretty Please” will help young children learn politeness by responding with positive reinforcement when children say please. The feature will roll out later this year.

New visual canvas for Google Assistant

The first Smart Displays will be released in July, powered by Google Assistant. In order to power the experiences provided by Smart Displays, Google had to whip up a new visual interface for Assistant.

Also of note, Google Assistant’s visual UI is getting an overhaul on mobile devices as well in 2018.

Swiping up in the Google app will show a snapshot of the user’s entire day courtesy of Google Assistant. The new UI is coming to Android this summer and to iOS later this year.

Google Duplex

Using text to speech, deep learning, AI, and more, Google Assistant can be a real assistant. In a demo at I/O 2018, Google Assistant made a real call to a hair salon and had a back-and-forth conversation with an employee, ultimately booking an actual appointment for a woman’s haircut in the time span requested by the user.

This is not a feature that will roll out anytime soon, but it’s something Google is working hard to develop for both businesses and consumers. An initial version of the service that will call businesses to get store hours will roll out in the coming weeks, and the data collected will allow Google to update open and close hours in company profiles online.


TOP NEWS IN TEXT

Google News

[Article below cited from BGR.com] Google News is getting an overhaul that focuses on highlighting quality journalism. The revamp will make it easier for users to keep up with the news by showing a briefing at the top with five important stories. Local news will be highlighted as well, and the Google News app will constantly evolve and learn a user’s preferences as he or she uses the app.

Videos from YouTube and elsewhere will be showcased more prominently, and a new feature called Newscasts is like Instagram stories, but for news.

The refreshed Google News will also take steps to help users understand the full scope of a story, showcasing a variety of sources and formats. The new feature, which is called “Full Coverage,” will also help by providing related stories, background, timelines of key related events, and more.

Finally, a new Newsstand section lets users follow specific publications, and they can even subscribe to paid news services right inside the app. Paid subscriptions will make content available not just in the Google News app, but on the publisher’s website and elsewhere as well.

The updated Google News app is rolling out on the web, iOS, and Android beginning today, and it will be completely rolled out by the end of next week.

Android P

Google had already released the first build of Android P for developers, but on Tuesday the company discussed a number of new Android P features that fall into three core categories.

Intelligence

Google partnered with DeepMind to create a feature called Adaptive Battery. It uses machine learning to determine which apps you use frequently and which ones you use only sporadically, and it restricts background processes for seldom-used apps in order to save battery life.

Another new feature called Adaptive Brightness learns a user’s brightness preferences in different ambient lighting scenarios to improve auto-brightness settings.

App Actions is a new feature in Android P that predicts actions based on a user’s usage patterns. It helps users get to their next task more quickly. For example, if you search for a movie in Google, you might get an App Action that offers to open Fandango so you can buy tickets.

Slices is another new feature that allows developers to take a small piece of their apps — or “slice” — that can be rendered in different places. For example, a Google search for hotels might open a slice from Booking.com that lets users begin the booking process without leaving the search screen. Think of it as a widget, but inside another app instead of on the home screen.

Simplicity

Google wants to help technology fade to the background so that it gets out of the user’s way.

First, Android P’s navigation has been overhauled. Swipe up on a small home button at the bottom and a new app switcher will open. Swipe up again and the app drawer will open. The new app switcher is now horizontal, and it looks a lot like the iPhone app switcher in iOS 11.

Also appreciated is a new rotation button that lets users choose which apps can auto-rotate and which ones cannot.

Digital wellbeing

Android P brings some important changes to Android that focus on wellbeing.

There’s a new dashboard that shows users exactly how they spent their day on their phone. It’ll show you which apps you use and for how long, and it provides other important info as well. Controls will be available to help users limit the amount of time they spend in certain apps.

An enhanced Do Not Disturb mode will stop visual notifications as well as audio notifications and vibrations. There’s also a new “shush” feature that automatically enables Do Not Disturb when a phone is turned face down on a table. Important contacts will still be able to call even when the new Do Not Disturb mode is enabled.

There’s also a new wind-down mode that fades the display to grayscale when someone uses his or her phone late at night before bed.

Google announced a new Android P Beta program just like Apple’s public iOS beta program. It allows end users to try Android P on their phones beginning today.

Google Maps

A new “For You” tab in Google Maps shows you new businesses in your area as well as restaurants that are trending around you. Google also added a new “Your Match” score to display the likelihood of you liking a new restaurant based on your historical ratings.

Have trouble choosing a restaurant when you go out in a group? A long-press on any restaurant will add it to a new short list, and you can then share that list with friends. They can add other options, and the group can then choose a restaurant from the group list.

These new features will roll out to Maps this summer.

Computer Vision

A bit further down the road, Google is working on a fascinating new feature that combines computer vision courtesy of the camera with Google Maps Street View to create an AR experience in Google Maps.

Google Lens is also coming to additional devices in the coming weeks, and there are new features coming as well.

Lens can now understand words, and you can copy and paste words from a sign or a piece of paper to the phone’s clipboard. You can also get context — for example, Google Lens can see a dish on a menu and tell you the ingredients.

A new shopping feature lets you point your camera at an item to get prices and reviews. And finally, Google Lens now works in real time, constantly scanning items in the camera frame to give information and, soon, to overlay live results on items in your camera’s view.

BONUS: Waymo self-driving taxi service

Waymo is the only company that currently has a fleet of fully self-driving cars with no driver needed in the driver’s seat. On Tuesday, Waymo announced a new self-driving taxi service that will soon launch in Phoenix, Arizona. Customers will be able to hail autonomous cars with no one in the driver’s seat, and use those self-driving cars to travel to any local destination. The service will launch before the end of 2018.


Gadgets – TechCrunch Devin Coldewey

NASA’s latest mission to Mars, InSight, is set to launch early Saturday morning in pursuit of a number of historic firsts in space travel and planetology. The lander’s instruments will probe the surface of the planet and monitor its seismic activity with unprecedented precision, while a pair of diminutive cubesats riding shotgun will test the viability of tiny spacecraft for interplanetary travel.

Saturday at 4:05 AM Pacific is the first launch opportunity, but if weather forbids it, they’ll just try again soon after — the chances of clouds sticking around all the way until June 8, when the launch window closes, are slim to none.

InSight isn’t just a pretty name they chose; it stands for Interior Exploration using Seismic Investigations, Geodesy and Heat Transport, at least after massaging the acronym a bit. Its array of instruments will teach us about the Martian interior, granting us insight (see what they did there?) into the past and present of Mars and the other rocky planets in the solar system, including Earth.

Bruce Banerdt, principal investigator for the mission at NASA’s Jet Propulsion Laboratory, has been pushing for this mission for more than two decades, after practically a lifetime working at the place.

“This is the only job I’ve ever had in my life other than working in the tire shop during the summertime,” he said in a recent NASA podcast. He’s worked on plenty of other missions, of course, but his dedication to this one has clearly paid off. It was actually originally scheduled to launch in 2016, but some trouble with an instrument meant they had to wait until the next launch window — now.

InSight is a lander in the style of Phoenix, about the size of a small car, and it will be shot towards Mars faster than a speeding bullet. The launch is a first in itself: NASA has never launched an interplanetary mission from the West Coast, but conditions aligned in this case, making California’s Vandenberg Air Force Base the best option. It doesn’t even require a gravity assist to get where it’s going.

“Instead of having to go to Florida and using the Earth’s rotation to help slingshot us into orbit… We can blast our way straight out,” Banerdt said in the same podcast. “Plus we get to launch in a way that is gonna be visible to maybe 10 million people in Southern California because this rocket’s gonna go right by LA, right by San Diego. And if people are willing to get up at four o’clock in the morning, they should see a pretty cool light show that day.”

The Atlas V will take it up to orbit and the Centaur will give it its push towards Mars, after which it will cruise for six months or so, arriving late in the Martian afternoon on November 26 (Earth calendar).

Its landing will be as exciting (and terrifying) as Phoenix’s and many others. When it hits the Martian atmosphere, InSight will be going more than 13,000 MPH. It’ll slow down first using the atmosphere itself, losing 90 percent of its velocity to friction against a new, reinforced heat shield. A parachute takes off another 90 percent, but it’ll still be going over 100 MPH, which would make for an uncomfortable landing. So a couple thousand feet up it will transition to landing jets that will let it touch down at a stately 5.4 MPH at the desired location and orientation.
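Those percentages check out as napkin math, using only the figures in the paragraph above:

```python
# Napkin math on InSight's entry, descent, and landing, using the figures above.

entry_mph = 13_000

after_heat_shield = entry_mph * (1 - 0.90)         # heat shield sheds ~90%
after_parachute = after_heat_shield * (1 - 0.90)   # parachute sheds ~90% of what's left
touchdown_mph = 5.4                                # landing jets handle the rest

print(f"After heat shield: ~{after_heat_shield:,.0f} MPH")
print(f"After parachute:   ~{after_parachute:,.0f} MPH (still 'over 100 MPH')")
print(f"At touchdown:      {touchdown_mph} MPH")
```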

After the dust has settled (literally) and the lander has confirmed everything is in working order, it will deploy its circular, fanlike solar arrays and get to work.

Robot arms and self-hammering robomoles

InSight’s mission is to get into the geology of Mars with more detail and depth than ever before. To that end it is packing gear for three major experiments.

SEIS is a collection of six seismic sensors (making the name a tidy bilingual, bidirectional pun) that will sit on the ground under what looks like a tiny Kingdome and monitor the slightest movement of the ground underneath. Tiny high-frequency vibrations or longer-period oscillations, they should all be detected.

“Seismology is the method that we’ve used to gain almost everything we know, all the basic information about the interior of the Earth, and we also used it back during the Apollo era to understand and to measure sort of the properties of the inside of the moon,” Banerdt said. “And so, we want to apply the same techniques but use the waves that are generated by Mars quakes, by meteorite impacts to probe deep into the interior of Mars all the way down to its core.”

The heat flow and physical properties probe is an interesting one. It will monitor the temperature of the planet below the surface continually for the duration of the mission — but in order to do so, of course, it has to dig its way down. For that purpose it’s installed with what the team calls a “self-hammering mechanical mole.” Pretty self-explanatory, right?

The “mole” is sort of like a hollow, inch-thick, 16-inch-long nail that will use a spring-loaded tungsten block inside itself to drive itself into the rock. It’s estimated that it will take somewhere between 5,000 and 20,000 strikes to get deep enough to escape the daily and seasonal temperature changes at the surface.

Lastly there’s the Rotation and Interior Structure Experiment, which actually doesn’t need a giant nail, a tiny Kingdome, or anything like that. The experiment involves tracking the position of Insight with extreme precision as Mars rotates, using its radio connection with Earth. It can be located to within about four inches, which when you think about it is pretty unbelievable to begin with. The way that position varies may indicate a wobble in the planet’s rotation and consequently shed light on its internal composition. Combined with data from similar experiments in the ’70s and ’90s, it should let planetologists determine how molten the core is.

“In some ways, InSight is like a scientific time machine that will bring back information about the earliest stages of Mars’ formation 4.5 billion years ago,” said Banerdt in an earlier news release. “It will help us learn how rocky bodies form, including Earth, its moon, and even planets in other solar systems.”

In another space first, InSight has a robotic arm that will not just do things like grab rocks to look at, but will pull items from its own inventory and deploy them into its workspace. Its little fingers will grab handles on top of each deployable instrument and lift it just like a human might. Well, maybe a little differently, but the principle is the same. At nearly 8 feet long, it has a bit more reach than the average astronaut.

Cubes riding shotgun

One of the MarCO cubesats.

InSight is definitely the main payload, but it’s not the only one. Launching on the same rocket are two cubesats, known collectively as Mars Cube One, or MarCO. These “briefcase-size” guys will separate from the rocket around the same time as InSight, but take slightly different trajectories. They don’t have the control to adjust their motion and enter an orbit, so they’ll just zoom by Mars right as InSight is landing.

Cubesats launch all the time, though, right? Sure — into Earth orbit. This will be the first attempt to send cubesats to another planet. If successful, there’s no limit to what could be accomplished — assuming you don’t need to pack anything bigger than a breadbox.

The spacecraft aren’t carrying any super-important experiments; there are two in case one fails, and both are only equipped with UHF antennas to send and receive data, plus a couple of low-resolution visible-light cameras. The experiment here is really the cubesats themselves and this launch technique. If they make it to Mars, they might be able to help relay InSight’s signal home, and if they keep operating beyond that, it’s just icing on the cake.

You can follow along with InSight’s launch here; there’s also the traditional anthropomorphized Twitter account. We’ll post a link to the live stream as soon as it goes up.

Gadgets – TechCrunch Devin Coldewey

It goes without saying that getting dressed is one of the most critical steps in our daily routine. But long practice has made it second nature, and people suffering from dementia may lose that familiarity, making dressing a difficult and frustrating process. This smart dresser from NYU is meant to help them through the process while reducing the load on overworked caregivers.

It may seem that replacing responsive human help with a robotic dresser is a bit insensitive. But not only are there rarely enough caregivers to help everyone in a timely manner at, say, a nursing care facility, the residents themselves might very well prefer the privacy and independence conferred by such a solution.

“Our goal is to provide assistance for people with dementia to help them age in place more gracefully, while ideally giving the caregiver a break as the person dresses – with the assurance that the system will alert them when the dressing process is completed or prompt them if intervention is needed,” explained the project’s leader, Winslow Burleson, in an NYU news release.

DRESS, as the team calls the device, is essentially a five-drawer dresser with a tablet on top that serves as both display and camera, monitoring and guiding the user through the dressing process.

There are lots of things that can go wrong when you’re putting on your clothes, and really only one way it can go right — shirts go on right side out and trousers forwards, socks on both feet, etc. That simplifies the problem for DRESS, which looks for tags attached to the clothes to check that they’re on correctly and in order, so that someone doesn’t attempt to put on their shoes before their trousers. Lights on each drawer signal the next item of clothing to don.
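As a rough illustration of the kind of rule-checking those tags enable, here is a minimal sketch; the item order, tag format, and prompts are hypothetical, not the NYU prototype’s actual logic:

```python
# Sketch: check detected clothing tags against a required order and
# orientation, then decide whether to advance, prompt, or finish. The tag
# format, item order, and messages are hypothetical illustrations of the idea.

DRESSING_ORDER = ["underwear", "trousers", "shirt", "socks", "shoes"]

def next_step(completed, detected_tag):
    """Given the items already on and the tag just seen, decide what to do."""
    expected = DRESSING_ORDER[len(completed)]
    item = detected_tag["item"]
    orientation = detected_tag["orientation"]

    if item != expected:
        return f"prompt: put on {expected} before {item}"
    if orientation != "correct":   # e.g. shirt inside out, trousers backwards
        return f"prompt: the {item} is {orientation}, try again"
    if len(completed) + 1 == len(DRESSING_ORDER):
        return "ok: dressing complete, notify caregiver"
    return f"ok: {item} on, light up the {DRESSING_ORDER[len(completed) + 1]} drawer"

print(next_step(["underwear"], {"item": "shoes", "orientation": "correct"}))
print(next_step(["underwear"], {"item": "trousers", "orientation": "backwards"}))
print(next_step(["underwear", "trousers", "shirt", "socks"],
                {"item": "shoes", "orientation": "correct"}))
```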

If there’s any problem — the person can’t figure something out, can’t find the right drawer or gets distracted, for instance — the caregiver is alerted and will come help. But if all goes right, the person will have dressed themselves all on their own, something that might not have been possible before.

DRESS is just a prototype right now, a proof of concept to demonstrate its utility. The team is looking into improving the vision system, standardizing clothing folding and enlarging or otherwise changing the coded tags on each item.