Gadgets – TechCrunch Devin Coldewey

The complex optics involved with putting a screen an inch away from the eye in VR headsets could make for smartglasses that correct for vision problems. These prototype “autofocals” from Stanford researchers use depth sensing and gaze tracking to bring the world into focus when someone lacks the ability to do it on their own.

I talked with lead researcher Nitish Padmanaban at SIGGRAPH in Vancouver, where he and the others on his team were showing off the latest version of the system. It’s meant, he explained, to be a better solution to the problem of presbyopia, which is basically when your eyes refuse to focus on close-up objects. It happens to millions of people as they age, even people with otherwise excellent vision.

There are, of course, bifocals and progressive lenses that bend light in such a way as to bring such objects into focus — purely optical solutions, and cheap as well, but inflexible, and they only provide a small “viewport” through which to view the world. And there are adjustable-lens glasses as well, but they must be adjusted slowly and manually with a dial on the side. What if you could make the whole lens change shape automatically, depending on the user’s need, in real time?

That’s what Padmanaban and colleagues Robert Konrad and Gordon Wetzstein are working on, and although the current prototype is obviously far too bulky and limited for actual deployment, the concept seems totally sound.

Padmanaban previously worked in VR, and mentioned what’s called the convergence-accommodation problem. Basically, the way that we see changes in real life when we move and refocus our eyes from far to near doesn’t happen properly (if at all) in VR, and that can produce pain and nausea. Having lenses that automatically adjust based on where you’re looking would be useful there — and indeed some VR developers were showing off just that only 10 feet away. But it could also apply to people who are unable to focus on nearby objects in the real world, Padmanaban thought.

This is an old prototype, but you get the idea.

It works like this. A depth sensor on the glasses collects a basic view of the scene in front of the person: a newspaper is 14 inches away, a table three feet away, the rest of the room considerably more. Then an eye-tracking system checks where the user is currently looking and cross-references that with the depth map.

Having been equipped with the specifics of the user’s vision problem, for instance that they have trouble focusing on objects closer than 20 inches away, the apparatus can then make an intelligent decision as to whether and how to adjust the lenses of the glasses.

In the case above, if the user is looking at the table or the rest of the room, the glasses assume whatever normal correction the person requires to see clearly — perhaps none. But if they shift their gaze to focus on the paper, the glasses immediately adjust the lenses (perhaps independently per eye) to bring that object into focus in a way that doesn’t strain the person’s eyes.
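
To make that decision step concrete, here is a minimal sketch in Python. It is my own illustration, not the Stanford team’s code: the depth map, the 20-inch near limit and the function names are assumptions, and the extra lens power is just the standard reading-add formula (the difference of the reciprocal distances, in diopters).

```python
def choose_lens_power(gaze_xy, depth_map, near_limit_m=0.5, base_correction_dpt=0.0):
    """Illustrative sketch, not the prototype's actual code.

    Looks up how far away the fixated object is; if it sits inside the wearer's
    near-focus limit (~20 inches here), adds the extra lens power needed to
    bring it into focus. Distances in meters, powers in diopters.
    """
    distance_m = depth_map[gaze_xy]          # depth-sensor reading at the gaze point
    if distance_m >= near_limit_m:
        return base_correction_dpt           # normal distance correction, perhaps none
    # Reading-add formula: power needed to shift focus from the near limit to the object.
    extra_dpt = 1.0 / distance_m - 1.0 / near_limit_m
    return base_correction_dpt + extra_dpt

# Example: newspaper at ~14 inches (0.36 m), table at ~3 feet (0.9 m)
depths = {(120, 80): 0.36, (40, 200): 0.9}
print(choose_lens_power((120, 80), depths))  # ~0.78 D of extra power
print(choose_lens_power((40, 200), depths))  # 0.0, no adjustment needed
```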

The whole process of checking the gaze, depth of the selected object and adjustment of the lenses takes a total of about 150 milliseconds. That’s long enough that the user might notice it happens, but the whole process of redirecting and refocusing one’s gaze takes perhaps three or four times that long — so the changes in the device will be complete by the time the user’s eyes would normally be at rest again.

“Even with an early prototype, the Autofocals are comparable to and sometimes better than traditional correction,” reads a short summary of the research published for SIGGRAPH. “Furthermore, the ‘natural’ operation of the Autofocals makes them usable on first wear.”

The team is currently conducting tests to measure more quantitatively the improvements derived from this system, and to check for any possible ill effects, glitches or other complaints. They’re a long way from commercialization, but Padmanaban suggested that some manufacturers are already looking into this type of method and that, despite its early stage, it’s highly promising. We can expect to hear more from them when the full paper is published.

Gadgets – TechCrunch Devin Coldewey

NASA’s ambitious mission to go closer to the Sun than ever before is set to launch in the small hours between Friday and Saturday — at 3:33 AM Eastern from Kennedy Space Center in Florida, to be precise. The Parker Solar Probe, after a handful of gravity assists and preliminary orbits, will enter a stable orbit around the enormous nuclear fireball that gives us all life and sample its radiation from less than 4 million miles away. Believe me, you don’t want to get much closer than that.

If you’re up late tonight (technically tomorrow morning), you can watch the launch live on NASA’s stream.

This is the first mission named after a living researcher, in this case Eugene Parker, who in the ’50s made a number of proposals and theories about the way that stars give off energy. He’s the guy who gave us solar wind, and his research was hugely influential in the study of the sun and other stars — but it’s only now that some of his hypotheses can be tested directly. (Parker himself visited the craft during its construction, and will be at the launch. No doubt he is immensely proud and excited about this whole situation.)

“Directly” means going as close to the sun as technology allows — which leads us to the PSP’s first major innovation: its heat shield, or thermal protection system.

There’s one good thing to be said for the heat near the sun: it’s a dry heat. Because there’s no water vapor or gases in space to heat up, find some shade and you’ll be quite comfortable. So the probe is essentially carrying the most heavy-duty parasol ever created.

It’s a sort of carbon sandwich, with superheated carbon composite on the outside and a carbon foam core. All together it’s less than a foot thick, but it reduces the temperature the probe’s instruments are subjected to from 2,500 degrees Fahrenheit to 85 — actually cooler than it is in much of the U.S. right now.

Go on – it’s quite cool.

The car-sized Parker will orbit the sun and constantly rotate itself so that the heat shield is facing inwards and blocking the brunt of the solar radiation. The instruments mostly sit behind it in a big insulated bundle.

And such instruments! There are three major experiments or instrument sets on the probe.

WISPR (Wide-Field Imager for Parker Solar Probe) is a pair of wide-field telescopes that will watch and image the structure of the corona and solar wind. This is the kind of observation we’ve made before — but never from up close. We generally see these phenomena from the neighborhood of the Earth, nearly 100 million miles away. You can imagine that cutting out 90 million miles of cosmic dust, interfering radiation, and other nuisances will produce an amazingly clear picture.

SWEAP (Solar Wind Electrons Alphas and Protons investigation) looks out to the side of the craft to watch the flows of electrons as they are affected by solar wind and other factors. And on the front is the Solar Probe Cup (I suspect this is a reference to the Ray Bradbury story, “The Golden Apples of the Sun”), which is exposed to the full strength of the sun’s radiation; a tiny opening allows charged particles in, and by tracking how those particles pass through a series of charged windows, the instrument can sort them by type and energy.
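
For a rough sense of how that kind of sorting works, here is a toy model in Python. It is my own illustration, not the instrument’s actual electronics: a singly charged particle only makes it past a grid biased to voltage V if its kinetic energy exceeds qV, so sweeping the voltage and counting what gets through traces out the energy distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
# Assumed, solar-wind-ish population of protons (singly charged), energies in eV
proton_energies_eV = rng.normal(loc=1000, scale=200, size=100_000)

# Sweep the retarding voltage; a proton passes only if its energy (in eV) exceeds
# the grid potential (in V), since for charge +1e the two are numerically equal.
sweep_V = np.linspace(0, 2000, 81)
passed = np.array([(proton_energies_eV > V).sum() for V in sweep_V])

# The drop in counts between successive voltage steps is the number of particles
# in that energy bin, which is the measured energy spectrum.
spectrum = -np.diff(passed)
print(sweep_V[np.argmax(spectrum)])  # peaks near the beam's ~1000 eV center
```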

FIELDS is another that gets the full heat of the sun. Its antennas are the ones sticking out from the sides — they need to in order to directly sample the electric field surrounding the craft. A set of “fluxgate magnetometers,” clearly a made-up name, measures the magnetic field at an incredibly high rate: two million samples per second.

They’re all powered by solar panels, which seems obvious, but actually it’s a difficult proposition to keep the panels from overloading that close to the sun. They hide behind the shield and just peek out at an oblique angle, so only a fraction of the radiation hits them.

Even then, they’ll get so hot that the team needed to implement the first ever active water cooling system on a spacecraft. Water is pumped through the cells and back behind the shield, where it is cooled by, well, space.

The probe’s mission profile is a complicated one. After escaping the clutches of the Earth, it will swing by Venus, not to get a gravity boost but, as one official described it, “almost like doing a little handbrake turn.” The maneuver slows the probe down and sends it closer to the sun, and it will do that seven times in all, each pass bringing it closer to the sun’s surface until it ultimately arrives in a stable orbit 3.83 million miles above it. That’s about 95 percent of the way from the Earth to the sun.

On the way it will hit a top speed of 430,000 miles per hour, which will make it the fastest spacecraft ever launched.

Parker will make 24 total passes through the corona, and during these times communication with Earth may be interrupted or impractical. If a solar cell is overheating, do you want to wait 20 minutes for a decision from NASA on whether to pull it back? No. This close to the sun, even a slight miscalculation could reduce the probe to a cinder, so the team has imbued it with more than the usual autonomy.

It’s covered in sensors in addition to its instruments, and an onboard AI will be empowered to make decisions to rectify anomalies. That sounds worryingly like a HAL 9000 situation, but there are no humans on board to kill, so it’s probably okay.

The mission is scheduled to last 7 years, after which time the fuel used to correct the craft’s orbit and orientation is expected to run out. At that point it will continue as long as it can before drift causes it to break apart and, one rather hopes, become part of the sun’s corona itself.

The Parker Solar Probe is scheduled for launch early Saturday morning, and we’ll update this post when it takes off successfully or, as is possible, is delayed until a later date in the launch window.

Gadgets – TechCrunch Devin Coldewey

I love camping, but there’s always an awkward period, after you’ve left the tent but before you’ve made coffee, during which I hate camping. It’s hard to watch the pot not boil and not want to just go back to bed, but since the warm air escaped when I opened the tent, it’s pointless. Anyway, the Swiss have figured out a great way to boil water faster, and I want one of these sweet stoves now.

The PeakBoil stove comes from design students at ETH Zurich, who have clearly faced the same problems I have. But since they actually camp in inclement weather, they also have to deal with wind blowing out the feeble flame of an ordinary gas burner.

Their attempt to improve on the design takes the controversial step of essentially installing a stovepipe inside the vessel and heating it from the inside out rather than from the bottom up. This has been used in lots of other situations to heat water but it’s the first time I’ve seen it in a camp stove.

By carefully configuring the gas nozzles and adding ripples to the wall of the heat pipe, PeakBoil “increases the contact area between the flame and the jug,” explained doctoral student and project leader Julian Ferchow in an ETH Zurich news release.

“That, plus the fact that the wall is very thin, makes heat transfer to the contents of the jug ideal,” added his colleague Patrick Beutler.
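
A quick back-of-the-envelope check shows why that matters. The numbers below are my own rough assumptions, not ETH’s figures, but with a fixed flame temperature the heat flowing into the water scales with the contact area between hot gas and metal (Q = h * A * dT), so doubling that area roughly halves the time to boil.

```python
# Rough numbers of my own, not ETH Zurich's: just to show that boil time
# scales inversely with the gas-to-metal contact area (Q = h * A * dT).

water_mass_kg   = 0.5                   # half a litre for morning coffee
c_water_J_kgK   = 4186
temp_rise_K     = 80                    # ~20 C up to boiling
energy_needed_J = water_mass_kg * c_water_J_kgK * temp_rise_K   # ~167 kJ

h_W_m2K = 60.0                          # assumed flame-to-wall transfer coefficient
dT_K    = 600.0                         # assumed gas-to-water temperature difference

for label, area_m2 in [("flat burner bottom", 0.01), ("rippled internal chimney", 0.02)]:
    power_W = h_W_m2K * area_m2 * dT_K
    minutes = energy_needed_J / power_W / 60
    print(f"{label}: {power_W:.0f} W, about {minutes:.1f} min to boil")
```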

Keeping the flames isolated inside the chimney, behind baffles, minimizes wind interference and means you don’t have to burn extra gas just to keep the flame alive.

The design was created using a selective laser melting or sintering process, in which metal powder is melted in a pattern much like a 3D printer lays down heated plastic. It’s really just another form of additive manufacturing, and it gave the students “a huge amount of design freedom…with metal casting, for instance, we could never achieve channels that are as thin as the ones inside our gas burner,” Ferchow said.

Of course, the design means it’s pretty much only usable for boiling water (you wouldn’t want to balance a pan on top of it), but that’s such a common and specific use case that many campers already have a stove dedicated to the purpose.

The team is looking to further improve the design and also find an industry partner with which to take it to market. MSR, GSI, REI… I’m looking at you. Together we can make my mornings bearable.

Gadgets – TechCrunch Devin Coldewey

Got some spare time this weekend? Why not build yourself a working rover from plans provided by NASA? The spaceniks at the Jet Propulsion Laboratory have all the plans, code, and materials for you to peruse and use — just make sure you’ve got $2,500 and a bit of engineering know-how. This thing isn’t made out of Lincoln Logs.

The story is this: after Curiosity landed on Mars, JPL wanted to create something a little smaller and less complex that it could use for educational purposes. ROV-E, as they called this new rover, traveled with JPL staff throughout the country.

Unsurprisingly, among the many questions asked was often whether a class or group could build one of their own. The answer, unfortunately, was no: though far less expensive and complex than a real Mars rover, ROV-E was still too expensive and complex to be a class project. So JPL engineers decided to build one that wasn’t.

The result is the JPL Open Source Rover, a set of plans that mimic the key components of Curiosity but are simpler and use off-the-shelf components.

“We wanted to give back to the community and lower the barrier of entry by giving hands-on experience to the next generation of scientists, engineers, and programmers,” said JPL’s Tom Soderstrom in a post announcing the OSR.

The OSR uses Curiosity-like “Rocker-Bogie” suspension, corner steering and a pivoting differential, allowing movement over rough terrain, and its brain is a Raspberry Pi. You can find all the parts in the usual supply catalogs and hardware stores, but you’ll also need a set of basic tools: a bandsaw to cut metal, probably a drill press, a soldering iron, snips and wrenches, and so on.

“In our experience, this project takes no less than 200 person-hours to build, and depending on the familiarity and skill level of those involved could be significantly more,” the project’s creators write on the GitHub page.

So basically, unless you’re literally rocket scientists, expect double that. JPL notes, though, that it did work with schools to adjust the building process and instructions.

There’s flexibility built into the plans, too: you can load custom apps, connect payloads and sensors to the brain, and modify the mechanics however you’d like. It’s open source, after all. Make it your own.

“We released this rover as a base model. We hope to see the community contribute improvements and additions, and we’re really excited to see what the community will add to it,” said project manager Mik Cox. “I would love to have had the opportunity to build this rover in high school, and I hope that through this project we provide that opportunity to others.”

Gadgets – TechCrunch Devin Coldewey

Gripping something with your hand is one of the first things you learn to do as an infant, but it’s far from a simple task, and it only gets more complex and variable as you grow up. That complexity makes grasping a difficult skill for machines to teach themselves, but researchers at the Elon Musk and Sam Altman-backed OpenAI have created a system that not only holds and manipulates objects much like a human does, but developed these behaviors all on its own.

Many robots and robotic hands are already proficient at certain grips or movements — a robot in a factory can wield a bolt gun even more dexterously than a person. But the software that lets that robot do that task so well is likely to be hand-written and extremely specific to the application. You couldn’t, for example, give it a pencil and ask it to write. Even something on the same production line, like welding, would require a whole new system.

Yet for a human, picking up an apple isn’t so different from picking up a cup. There are differences, but our brains automatically fill in the gaps and we can improvise a new grip, hold an unfamiliar object securely and so on. This is one area where robots lag severely behind their human models. Furthermore, you can’t just train a bot to do what a human does — you’d have to provide millions of examples to adequately show what a human would do with thousands of given objects.

The solution, OpenAI’s researchers felt, was not to use human data at all. Instead, they let the computer try and fail over and over in a simulation, slowly learning how to move its fingers so that the object in its grasp moves as desired.

The system, which they call Dactyl, was provided only with the positions of its fingers and three camera views of the object in-hand — but remember, when it was being trained, all this data was simulated, taking place in a virtual environment. There, the computer doesn’t have to work in real time — it can try a thousand different ways of gripping an object in a few seconds, analyzing the results and feeding that data forward into the next try. (The hand itself is a Shadow Dexterous Hand, which is also more complex than most robotic hands.)

In addition to different objects and poses the system needed to learn, there were other randomized parameters, like the amount of friction the fingertips had, the colors and lighting of the scene and more. You can’t simulate every aspect of reality (yet), but you can make sure that your system doesn’t only work in a blue room, on cubes with special markings on them.
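
In code, that kind of domain randomization amounts to re-rolling the world’s parameters before every simulated attempt. The sketch below is my own illustration of the idea, not OpenAI’s Dactyl code; the parameter names, ranges and the simulator/policy interfaces are all hypothetical.

```python
import random

def randomize_world():
    """Sample fresh physical and visual parameters for one simulated episode,
    so the policy can't overfit to any single, exact version of the simulator.
    (Hypothetical parameters and ranges, for illustration only.)"""
    return {
        "fingertip_friction": random.uniform(0.7, 1.3),
        "object_mass_kg":     random.uniform(0.03, 0.30),
        "object_scale":       random.uniform(0.95, 1.05),
        "light_intensity":    random.uniform(0.3, 2.0),
        "object_color_rgb":   [random.random() for _ in range(3)],
        "camera_noise_px":    abs(random.gauss(0, 2)),
    }

def train(policy, simulator, episodes=1_000_000):
    """Trial-and-error loop: attempt a manipulation, score it, update the policy.
    `policy` and `simulator` stand in for whatever RL framework is actually used."""
    for _ in range(episodes):
        simulator.reset(**randomize_world())   # a slightly different world each time
        rollout = simulator.run(policy)        # try to reorient the object in-hand
        policy.update(rollout)                 # learn from success or failure
```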

They threw a lot of power at the problem: 6144 CPUs and 8 GPUs, “collecting about one hundred years of experience in 50 hours.” And then they put the system to work in the real world for the first time — and it demonstrated some surprisingly human-like behaviors.

The things we do with our hands without even noticing, like turning an apple around to check for bruises or passing a mug of coffee to a friend, use lots of tiny tricks to stabilize or move the object. Dactyl recreated several of them, for example holding the object with a thumb and single finger while using the rest to spin to the desired orientation.

What’s great about this system is not just the naturalness of its movements and that they were arrived at independently by trial and error, but that it isn’t tied to any particular shape or type of object. Just like a human, Dactyl can grip and manipulate just about anything you put in its hand, within reason of course.

This flexibility is called generalization, and it’s important for robots that must interact with the real world. It’s impossible to hand-code separate behaviors for every object and situation in the world, but a robot that can adapt and fill in the gaps while relying on a set of core understandings can get by.

As with OpenAI’s other work, the paper describing the results is freely available, as are some of the tools they used to create and test Dactyl.

Gadgets – TechCrunch Devin Coldewey

A multi-year NASA contest to design a 3D-printable Mars habitat using on-planet materials has just hit another milestone — and a handful of teams have taken home some cold hard cash. This more laid-back phase had contestants designing their proposed habitat using architectural tools, with the five winners set to build scale models next year.

Technically this is the first phase of the third phase — the (actual) second phase took place last year and teams took home quite a bit of money.

The teams had to put together realistic 3D models of their proposed habitats, and not just in Blender or something. They used Building Information Modeling software that would require these things to be functional structures designed down to a particular level of detail — so you can’t just have 2D walls made of “material TBD,” and you have to take into account thickness from pressure sealing, air filtering elements, heating, etc.

The habitats had to have at least a thousand square feet of space, enough for four people to live for a year, along with room for the machinery and paraphernalia associated with, you know, living on Mars. They had to be largely assembled autonomously, at least enough that humans could occupy them as soon as they landed. They were judged on completeness, layout, 3D-printing viability, and aesthetics.


So although the images you see here look rather sci-fi, keep in mind they were also designed using industrial tools and vetted by experts with “a broad range of experience from Disney to NASA.” These are meant for Mars, not a paperback cover. And they’ll have to be built in miniature for real next year, so they’d better be realistic.

The five winning designs embody a variety of approaches. Honestly all these videos are worth a watch; you’ll probably learn something cool, and they really give an idea of how much thought goes into these designs.

Zopherus has the whole print taking place inside the body of a large lander, which brings its own high-strength printing mix to reinforce the “Martian concrete” that will make up the bulk of the structure. When it’s done printing and embedding the pre-built items like airlocks, it lifts itself up, moves over a few feet, and does it again, creating a series of small rooms. (They took first place and essentially tied the next team for take-home cash, a little under $21K.)

AI SpaceFactory focuses on the basic shape of the vertical cylinder as both the most efficient use of space and also one of the most suitable for printing. They go deep on the accommodations for thermal expansion and insulation, but also have thought deeply about how to make the space safe, functional, and interesting. This one is definitely my favorite.

Kahn-Yates has a striking design, with a printed structural layer giving way to a high-strength plastic layer that lets the light in. Their design is extremely spacious but in my eyes not very efficiently allocated. Who’s going to bring apple trees to Mars? Why have a spiral staircase with such a huge footprint? Still, if they could pull it off, this would allow for a lot of breathing room, something that will surely be of great value during a yearlong or multi-year stay on the planet.

SEArch+/Apis Cor has carefully considered the positioning and shape of its design to maximize light and minimize radiation exposure. There are two independent pressurized areas — everyone likes redundancy — and it’s built using a sloped site, which may expand the possible locations. It looks a little claustrophobic, though.

Northwestern University has a design that aims for simplicity of construction: an inflatable vessel provides the base for the printer to create a simple dome with reinforcing cross-beams. This practical approach no doubt won them points, and the inside, while not exactly roomy, is also practical in its layout. As AI SpaceFactory pointed out, a dome isn’t really the best shape (lots of wasted space) but it is easy and strong. A couple of these connected at the ends wouldn’t be so bad.

The teams split a total of $100K for this phase, and are now moving on to the hard part: actually building these things. In spring of 2019 they’ll be expected to have a working custom 3D printer that can create a 1:3 scale model of their habitat. It’s difficult to say who will have the worst time of it, but I’m thinking Kahn-Yates (that holey structure will be a pain to print) and SEArch+/Apis (slope, complex eaves and structures).

The purse for the real-world construction is an eye-popping $2 million, so you can bet the competition will be fierce. In the meantime, seriously, watch those videos above; they’re really interesting.

Gadgets – TechCrunch Devin Coldewey

Creatures that live in the depths of the oceans are often extremely fragile, making their collection a difficult affair. A new polyhedral sample collection mechanism acts like an “underwater Pokéball,” allowing scientists to catch ’em all without destroying their soft, squishy bodies in the process.

The ball is technically a dodecahedron that closes softly around the creature in front of it. It’s not exactly revolutionary except in that it is extremely simple mechanically — at depths of thousands of feet, the importance of this can’t be overstated — and non-destructive.

Sampling is often done via a tube with moving caps on both ends into which the creature must be guided and trapped, or a vacuum tube that sucks it in, which as you can imagine is at best unpleasant for the target and at worst, lethal.

The rotary actuated dodecahedron, or RAD, has five 3D-printed “petals” with a complex-looking but mechanically simple framework that allows them to close up simultaneously from force applied at a single point near the rear panel.

“I was building microrobots by hand in graduate school, which was very painstaking and tedious work,” explained creator Zhi Ern Teoh, of Harvard’s Wyss Institute, “and I wondered if there was a way to fold a flat surface into a three-dimensional shape using a motor instead.”

The answer is yes, obviously, since he made it; the details are published in Science Robotics. Inspired by origami and papercraft, Teoh and his colleagues applied their design knowledge to creating not just a fold-up polyhedron (you can cut one out of any sheet of paper) but a mechanism that would perform that folding process in one smooth movement. The result is a network of hinged arms around the polyhedron, tuned to push lightly and evenly and seal it up.

In testing, the RAD successfully captured some moon jellies in a pool, then at around 2,000 feet below the ocean surface was able to snag squid, octopus, and wild jellies and release them again with no harm done. They didn’t capture the octopus on camera, but apparently it was curious about the device.

Because of the RAD’s design, it would work just as well miles below the surface, the researchers said, though they haven’t had a chance to test that yet.

“The RAD sampler design is perfect for the difficult environment of the deep ocean because its controls are very simple, so there are fewer elements that can break,” Teoh said.

There’s also no barrier to building a larger one, or a similar device that would work in space, he pointed out. As for current applications like sampling of ocean creatures, the setup could easily be enhanced with cameras and other tools or sensors.

“In the future, we can capture an animal, collect lots of data about it like its size, material properties, and even its genome, and then let it go,” said co-author David Gruber, from CUNY. “Almost like an underwater alien abduction.”

Gadgets – TechCrunch Devin Coldewey

Lasers! Everybody loves them, everybody wants them. But outside a few niche applications they have failed to live up to the destructive potential that Saturday morning cartoons taught us all to expect. In defiance of this failure, a company in China claims to have produced a “laser AK-47” that can burn targets in a fraction of a second from half a mile away. But skepticism is still warranted.

The weapon, dubbed the ZKZM-500, is described by the South China Morning Post as being about the size and weight of an ordinary assault rifle, but capable of firing hundreds of shots, each of which can cause “instant carbonization” of human skin.

“The pain will be beyond endurance,” added one of the researchers.

Now, there are a few red flags here. First is the simple fact that the weapon is only described and not demonstrated. Second is that what is described sounds incompatible with physics.

Laser weaponry capable of real harm has eluded the eager boffins of the world’s militaries for several reasons, none of which sound like they’ve been addressed in this research, which is long on bombast but short, at least in the SCMP article, on substance.

First there is the problem of power. Lasers of relatively low power can damage eyes easily because our eyes are among the most sensitive optical instruments ever developed on Earth. But such a laser may prove incapable of even popping a balloon. That’s because the destruction in the eye is due to an overload of light on a light-sensitive medium, while destruction of a physical body (be it a human body or, say, a missile) is due to heat.

Existing large-scale laser weapons systems powered by parallel arrays of batteries struggle to create meaningful heat damage unless trained on targets for a matter of seconds. And the power required to set a person aflame instantly from half a mile away is truly huge. Let’s just do a little napkin math here.

The article says that the gun is powered by rechargeable lithium-ion batteries, the same in principle as those in your phone (though no doubt bigger). And it is said to be capable of a thousand two-second shots, amounting to two thousand seconds, or about half an hour total. A single laser “shot” of the magnitude tested by airborne and vehicle systems is on the order of tens of kilowatts, and those have trouble causing serious damage, which is why they’ve been all but abandoned by those developing them.

Now, the systems that do fire at those power levels use chemical batteries to power them, since the energy needs to be dumped far faster than lithium-ion batteries will safely discharge. But let’s pretend for a second that we could use lithium-ion batteries anyway. The Tesla Powerwall is a useful comparator: it provides a few kilowatts of power and stores a few kilowatt-hours. And… it weighs more than 200 pounds.
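
Here is that napkin math spelled out. The per-shot power is my own illustrative pick from the “tens of kilowatts” range mentioned above; the shot count and duration come from the article’s own figures.

```python
# Illustrative only: a per-shot power picked from the "tens of kilowatts" range.
shot_power_W = 20_000
shot_time_s  = 2
num_shots    = 1_000

energy_per_shot_J = shot_power_W * shot_time_s              # 40 kJ per shot
total_energy_kWh  = energy_per_shot_J * num_shots / 3.6e6   # joules to kilowatt-hours

print(f"{total_energy_kWh:.1f} kWh for a full 'magazine'")  # about 11 kWh
# A Powerwall stores a few kWh and weighs over 200 pounds, so a rifle-sized
# lithium-ion pack delivering this (in 20 kW bursts, no less) isn't plausible.
```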

There’s just no way that a laser powered by a lithium-ion battery that a person could carry would be capable of producing the kind of heat described at point blank range, let alone at 800 meters.

That’s because of attenuation. Lasers, unlike bullets, scatter as they travel, growing weaker and weaker. Attenuation is non-trivial at anything beyond, say, a few dozen meters. By the time you get out to 800, the air and water vapor the beam has passed through will have reduced it to a fraction of its original power.

Of course there are lasers that can fire from Earth to space and vice versa — but they’re not trying to fry protestors; all that matters is that a few photons arrive at the destination and are intelligible as a signal.

I’m not saying there will never be laser weapons. But I do feel confident in saying that this prototype, ostensibly ready for mass production and deployment among China’s anti-terrorist forces, is bunk. As much as I enjoy the idea of laser rifles, the idea of one that weighs a handful of pounds and fires hundreds of instantly skin-searing shots is just plain infeasible today.

The laser project is supposedly taking place at the Xian Institute of Optics and Precision Mechanics, at the Chinese Academy of Sciences. Hopefully they give a real-world demonstration of the device soon and put me to shame.

Gadgets – TechCrunch Devin Coldewey

For many of us, clean, drinkable water comes right out of the tap. But for billions it’s not that simple, and all over the world researchers are looking into ways to fix that. Today brings work from Berkeley, where a team is working on a water-harvesting apparatus that requires no power and can produce water even in the dry air of the desert. Hey, if a cactus can do it, why can’t we?

While there are numerous methods for collecting water from the air, many require power or parts that need to be replaced; what professor Omar Yaghi has developed needs neither.

The secret isn’t some clever solar concentrator or low-friction fan — it’s all about the materials. Yaghi is a chemist, and has created what’s called a metal-organic framework, or MOF, that’s eager both to absorb and release water.

It’s essentially a powder made of tiny crystals in which water molecules get caught as the temperature decreases. Then, when the temperature rises again, the water is released as vapor.

Yaghi demonstrated the process on a small scale last year, but now he and his team have published the results of a larger field test producing real-world amounts of water.

They put together a box about two feet per side with a layer of MOF on top that sits exposed to the air. Every night the temperature drops and the humidity rises, and water is trapped inside the MOF; in the morning, the sun’s heat drives the water from the powder, and it condenses on the box’s sides, kept cool by a sort of hat. The result of a night’s work: 3 ounces of water per pound of MOF used.

That’s not much more than a few sips, but improvements are already on the way. Currently the MOF uses zirconium, but an aluminum-based MOF, already being tested in the lab, will cost 99 percent less and produce twice as much water.
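
Some rough arithmetic (my own, with an assumed daily drinking figure) shows what those yields mean for the “handful of boxes” claim below.

```python
# Rough arithmetic of my own; the 3-liter daily drinking figure is an assumption.
OZ_PER_LITER = 33.8

yield_zr_oz_per_lb = 3.0                      # field-test yield reported above
yield_al_oz_per_lb = 2 * yield_zr_oz_per_lb   # aluminum MOF said to produce twice as much

need_oz = 3.0 * OZ_PER_LITER                  # ~101 fl oz of drinking water per day

print(f"zirconium MOF: {need_oz / yield_zr_oz_per_lb:.0f} lb of powder per person per day")
print(f"aluminum MOF:  {need_oz / yield_al_oz_per_lb:.0f} lb of powder per person per day")
# Roughly 34 lb vs. 17 lb of powder, spread across a few of those two-foot boxes.
```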

With the new powder and a handful of boxes, a person’s drinking needs are met without using any power or consumable material. Add a mechanism that harvests and stores the water and you’ve got yourself an off-grid potable water solution going.

“There is nothing like this,” Yaghi explained in a Berkeley news release. “It operates at ambient temperature with ambient sunlight, and with no additional energy input you can collect water in the desert. The aluminum MOF is making this practical for water production, because it is cheap.”

He says that there are already commercial products in development. More tests, with mechanical improvements and including the new MOF, are planned for the hottest months of the summer.

Gadgets – TechCrunch Devin Coldewey

A robot’s got to know its limitations. But that doesn’t mean it has to accept them. This one in particular uses tools to expand its capabilities, commandeering nearby items to construct ramps and bridges. It’s satisfying to watch but, of course, also a little worrying.

This research, from Cornell and the University of Pennsylvania, is essentially about making a robot take stock of its surroundings and recognize something it can use to accomplish a task that it knows it can’t do on its own. It’s actually more like a team of robots, since the parts can detach from one another and accomplish things on their own. But you didn’t come here to debate the multiplicity or unity of modular robotic systems! That’s for the folks at the IEEE International Conference on Robotics and Automation, where this paper was presented (and Spectrum got the first look).

SMORES-EP is the robot in play here, and the researchers have given it a specific breadth of knowledge. It knows how to navigate its environment, but also how to inspect it with its little mast-cam and from that inspection derive meaningful data like whether an object can be rolled over, or a gap can be crossed.

It also knows how to interact with certain objects, and what they do; for instance, it can use its built-in magnets to pull open a drawer, and it knows that a ramp can be used to roll up to an object of a given height or lower.

A high-level planning system directs the robots/robot-parts based on knowledge that isn’t critical for any single part to know. For example, given the instruction to find out what’s in a drawer, the planner understands that to accomplish that, the drawer needs to be open; for it to be open, a magnet-bot will have to attach to it from this or that angle, and so on. And if something else is necessary, for example a ramp, it will direct that to be placed as well.
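
A toy version of that kind of precondition chaining might look like the sketch below. It is my own illustration in Python, not the actual SMORES-EP planner; the actions and facts are made up to mirror the drawer example.

```python
# Each action lists the facts it needs and the facts it makes true.
ACTIONS = {
    "place_ramp":     {"needs": set(),              "gives": {"ramp_in_place"}},
    "climb_ledge":    {"needs": {"ramp_in_place"},  "gives": {"robot_on_ledge"}},
    "open_drawer":    {"needs": {"robot_on_ledge"}, "gives": {"drawer_open"}},
    "inspect_drawer": {"needs": {"drawer_open"},    "gives": {"contents_known"}},
}

def plan(goal, state, actions=ACTIONS, _seen=None):
    """Naive backward chaining: pick an action that yields `goal`, then plan
    for each of its unmet preconditions first."""
    if goal in state:
        return []
    _seen = _seen or set()
    if goal in _seen:
        return []
    _seen = _seen | {goal}
    for name, act in actions.items():
        if goal in act["gives"]:
            steps = []
            for pre in act["needs"]:
                steps += plan(pre, state, actions, _seen)
            return steps + [name]
    raise ValueError(f"no known action achieves {goal!r}")

print(plan("contents_known", state=set()))
# ['place_ramp', 'climb_ledge', 'open_drawer', 'inspect_drawer']
```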

The experiment shown in this video has the robot system demonstrating how this could work in a situation where the robot must accomplish a high-level task using this limited but surprisingly complex body of knowledge.

In the video, the robot is told to check the drawers for certain objects. In the first drawer, the target objects aren’t present, so it must inspect the next one up. But it’s too high — so it needs to get on top of the first drawer, which luckily for the robot is full of books and constitutes a ledge. The planner sees that a ramp block is nearby and orders it to be put in place, and then part of the robot detaches to climb up and open the drawer, while the other part maneuvers into place to check the contents. Target found!

In the next task, it must cross a gap between two desks. Fortunately, someone left the parts of a bridge just lying around. The robot puts the bridge together, places it in position after checking the scene, and sends its forward half rolling towards the goal.

These cases may seem rather staged, but this isn’t about the robot itself and its ability to tell what would make a good bridge. That comes later. The idea is to create systems that logically approach real-world situations based on real-world data and solve them using real-world objects. Being able to construct a bridge from scratch is nice, but unless you know what a bridge is for, when and how it should be applied, where it should be carried and how to get over it, and so on, it’s just a part in search of a whole.

Likewise, many a robot with a perfectly good drawer-pulling hand will have no idea that you need to open a drawer before you can tell what’s in it, or that maybe you should check other drawers if the first doesn’t have what you’re looking for!

Such basic problem-solving is something we take for granted, but nothing can be taken for granted when it comes to robot brains. Even in the experiment described above, the robot failed multiple times for multiple reasons while attempting to accomplish its goals. That’s okay — we all have a little room to improve.