Humans already find it unnerving enough when extremely alien-looking robots are kicked and interfered with, so one can only imagine how much worse it will be when they make unbroken eye contact and mirror your expressions while you heap abuse on them. This is the future we have selected.
The Simulative Emotional Expression Robot, or SEER, was on display at SIGGRAPH here in Vancouver, and it’s definitely an experience. The robot, a creation of Takayuki Todo, is a small humanoid head and neck that responds to the nearest person by making eye contact and imitating their expression.
It doesn’t sound like much, but it’s pretty complex to execute well, which, despite a few glitches, SEER managed to do.
At present it alternates between two modes: imitative and eye contact. Both, of course, rely on a nearby (or, one can imagine, built-in) camera that recognizes and tracks the features of your face in real time.
In imitative mode the positions of the viewer’s eyebrows and eyelids, and the position of their head, are mirrored by SEER. It’s not perfect — it occasionally freaks out or vibrates because of noisy face data — but when it worked it managed rather a good version of what I was giving it. Real humans are more expressive, naturally, but this little face with its creepily realistic eyes plunged deeply into the uncanny valley and nearly climbed the far side.
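Those glitches from noisy face data hint at a standard fix: smooth the tracked landmark values before driving the servos. Here is a minimal Python sketch of that idea; the landmark names, servo range and smoothing factor are my own illustrative assumptions, not details of Todo's implementation.

```python
# Hypothetical sketch of expression mirroring with jitter smoothing.
# Normalized landmark values (0.0-1.0) for brow height, eyelid openness,
# etc. would come from a face tracker; the servo mapping is invented.

def smooth(prev, new, alpha=0.3):
    """Exponential smoothing to damp noisy face-tracking data."""
    return prev + alpha * (new - prev)

def landmarks_to_servo(landmarks, state, servo_range=(0.0, 180.0)):
    """Map normalized landmark values to servo angles, smoothing each."""
    lo, hi = servo_range
    angles = {}
    for name, value in landmarks.items():
        value = min(max(value, 0.0), 1.0)     # clamp noisy readings
        target = lo + value * (hi - lo)       # scale to the servo range
        state[name] = smooth(state.get(name, target), target)
        angles[name] = state[name]
    return angles

state = {}
frame1 = landmarks_to_servo({"brow": 0.5, "eyelid": 1.0}, state)
frame2 = landmarks_to_servo({"brow": 1.0, "eyelid": 1.0}, state)  # sudden spike
```

With the smoothing factor at 0.3, a sudden spike in a landmark moves the servo only part of the way toward the new target each frame, which damps exactly the kind of vibration described above.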
Eye contact mode has the robot moving on its own while, as you might guess, making uninterrupted eye contact with whoever is nearest. It’s a bit creepy, but not in the way that some robots are — when you’re looked at by inadequately modeled faces, it just feels like bad VFX. In this case it was more the surprising amount of empathy you suddenly feel for this little machine.
That’s largely due to the delicate, childlike, neutral sculpting of the face and highly realistic eyes. If an Amazon Echo had those eyes, you’d never forget it was listening to everything you say. You might even tell it your problems.
This is just an art project for now, but the tech behind it is definitely the kind of thing you can expect to be integrated with virtual assistants and the like in the near future. Whether that’s a good thing or a bad one I guess we’ll find out together.
Making a bipedal robot is hard. You have to maintain exquisite balance at all times and, even with the amazing things Atlas can do, there is still a chance that your crazy robot will fall over and bop its electronic head. But what if that head is a quadcopter?
The University of Tokyo has done just that with its wild Aerial-Biped. The robot isn’t completely bipedal; instead, it’s designed to act like a bipedal robot without the tricky issue of being truly bipedal. Think of these legs as more a fun bit of puppetry that mimics walking but doesn’t really walk.
“The goal is to develop a robot that has the ability to display the appearance of bipedal walking with dynamic mobility, and to provide a new visual experience. The robot enables walking motion with very slender legs like those of a flamingo without impairing dynamic mobility. This approach enables casual users to choreograph biped robot walking without expertise. In addition, it is much cheaper compared to a conventional bipedal walking robot,” the team told IEEE.
The robot is similar to the bizarre-looking Ballu, a blimp robot with a floating head and spindly legs. The new robot learned how to walk convincingly through machine learning, a feat that gives it a realistic gait even though it is really an aerial system. It’s definitely a clever little project and could be interesting at a theme park or in an environment where a massive bipedal robot falling over on someone might be discouraged.
A little bot named QTrobot from LuxAI could be the link between therapists, parents, and autistic children. The robot, which features an LCD face and robotic arms, allows kids who are overwhelmed by human contact to become more comfortable in a therapeutic setting.
“The robot has the ability to create a triangular interaction between the human therapist, the robot, and the child,” co-founder Aida Nazarikhorram told IEEE. “Immediately the child starts interacting with the educator or therapist to ask questions about the robot or give feedback about its behavior.”
The robot reduces anxiety in autistic children and the researchers saw many behaviors – hand flapping, for example – slow down with the robot in the mix.
Interestingly, the robot is a better choice for children than an app or tablet. Because the robot is “embodied,” the researchers found that it draws attention and improves learning, especially when compared to a standard iPad/educational app pairing. In other words, children play with tablets and work with robots.
The robot is entirely self-contained and easily programmable. It can run for hours at a time and includes a 3D camera and full processor.
The researchers found that the robot doesn’t become the focus of the therapy but instead helps the therapist connect with the patient. This, obviously, is an excellent outcome for an excellent (and cute) little piece of technology.
Professor Hiroshi Ishiguro makes robots in Osaka. His latest robot, Ibuki, is one for the nightmare catalog: it’s a robotic 10-year-old boy that can move on little tank treads and has a soft, rubbery face and hands.
The robot has a full set of vision routines that can scan for faces, and it has a sort of half-track system for moving around. It has “involuntary” motions like blinking and little head bobs, but it is little more than a proof of concept right now, especially considering its weird robo-skull is transparent.
“An Intelligent Robot Infrastructure is an interaction-based infrastructure. By interacting with robots, people can establish nonverbal communications with the artificial systems. That is, the purpose of a robot is to exist as a partner and to have valuable interactions with people,” wrote Ishiguro. “Our objective is to develop technologies for the new generation information infrastructures based on Computer Vision, Robotics and Artificial Intelligence.”
Ishiguro is a roboticist who plays on the borders of humanity. He made a literal copy of himself in 2010. His current robots are even more realistic and Ibuki’s questing face and delicate hands are really very cool. That said, expect those soft rubber hands to one day close around your throat when the robots rise up to take back what is theirs. Good luck, humans!
Analysis of open source information carried out by the investigative website Bellingcat suggests drones that had been repurposed as flying bombs were indeed used in an attack on the president of Venezuela at the weekend.
The Venezuelan government claimed three days ago that an attempt had been made to assassinate President Maduro using two drones loaded with explosives. The president had been giving a speech, which was being broadcast live on television, when the incident occurred.
Initial video from a state-owned television network showed the reaction of Maduro, those around him and a parade of soldiers at the event to what appeared to be two blasts somewhere off camera. But the footage did not include shots of any drones or explosions.
News organization AP also reported that firefighters at the scene had cast doubt on the drone attack claim — suggesting there had instead been a gas explosion in a nearby flat.
Since then more footage has emerged, including videos purporting to show a drone exploding and a drone tumbling alongside a building.
Bellingcat has carried out an analysis of publicly available information related to the attack, syncing timings from the state broadcast of Maduro’s speech and using frame-by-frame analysis, combined with photos and satellite imagery of Caracas, to pinpoint the locations of additional footage and determine whether the drone attack claim stands up.
The Venezuelan government has claimed the drones used were DJI Matrice 600s, each carrying approximately 1kg of C4 plastic explosive and, when detonated, capable of causing damage at a radius of around 50 meters.
DJI Matrice 600 drones are a commercial model, normally used for industrial work — with a U.S. price tag of around $5,000 apiece, suggesting the attack could have cost little over $10k to carry out — with 1kg of plastic explosive available commercially (for demolition purposes) at a cost of around $30.
Bellingcat says its analysis supports the government’s claim that the drone model used was a DJI Matrice 600, noting that the drones involved in the event each had six rotors. It also points to a photo of drone wreckage which appears to show the distinctive silver rotor tip of the model, although it also notes the drones appear to have had their legs removed.
Venezuela’s interior minister, Nestor Reverol, also claimed the government thwarted the attack using “special techniques and [radio] signal inhibitors”, which “disoriented” the drone that detonated closest to the presidential stand — a capability Bellingcat notes the Venezuelan security services are reported to have.
The second drone was said by Reverol to have “lost control” and crashed into a nearby building.
Bellingcat says it is possible to geolocate the video of the falling drone to the same location as the fire in the apartment that firefighters had claimed was caused by a gas canister explosion. It adds that images taken of this location during the fire show a hole in the wall of the apartment in the vicinity of where the drone would have crashed.
“It is a very likely possibility that the downed drone subsequently detonated, creating the hole in the wall of this apartment, igniting a fire, and causing the sound of the second explosion which can be heard in Video 2 [of the state TV broadcast of Maduro’s speech],” it further suggests.
Here’s its conclusion:
From the open sources of information available, it appears that an attack took place using two DBIEDs [drone-borne improvised explosive devices] while Maduro was giving a speech. Both the drones appear visually similar to DJI Matrice 600s, with at least one displaying features that are consistent with this model. These drones appear to have been loaded with explosive and flown towards the parade.
The first drone detonated somewhere above or near the parade, the most likely cause of the casualties announced by the Venezuelan government and pictured on social media. The second drone crashed and exploded approximately 14 seconds later and 400 meters away from the stage, and is the most likely cause of the fire which the Venezuelan firefighters described.
It also considers the claim of attribution by a group on social media calling itself “Soldados de Franelas” (aka ‘T-Shirt Soldiers’ — a reference to protestors wrapping a t-shirt around their head to cover their face and protect their identity). Bellingcat suggests it’s not clear from the group’s Twitter messages that they are “unequivocally claiming responsibility for the event”, owing to their use of passive language, and to a claim that the drones were shot down by government snipers — which it says “does not appear to be supported by the open source information available”.
Got some spare time this weekend? Why not build yourself a working rover from plans provided by NASA? The spaceniks at the Jet Propulsion Laboratory have all the plans, code, and materials for you to peruse and use — just make sure you’ve got $2,500 and a bit of engineering know-how. This thing isn’t made out of Lincoln Logs.
The story is this: after Curiosity landed on Mars, JPL wanted to create something a little smaller and less complex that it could use for educational purposes. ROV-E, as they called this new rover, traveled with JPL staff throughout the country.
Unsurprisingly, among the many questions asked was often whether a class or group could build one of their own. The answer, unfortunately, was no: though far less expensive and complex than a real Mars rover, ROV-E was still too expensive and complex to be a class project. So JPL engineers decided to build one that wasn’t.
The result is the JPL Open Source Rover, a set of plans that mimics the key components of Curiosity but is simpler and uses off-the-shelf components.
“We wanted to give back to the community and lower the barrier of entry by giving hands-on experience to the next generation of scientists, engineers, and programmers,” said JPL’s Tom Soderstrom in a post announcing the OSR.
The OSR uses Curiosity-like “rocker-bogie” suspension, corner steering and a pivoting differential, allowing movement over rough terrain, and the brain is a Raspberry Pi. You can find all the parts in the usual supply catalogs and hardware stores, but you’ll also need a set of basic tools: a bandsaw to cut metal, probably a drill press, a soldering iron, snips, wrenches, and so on.
So basically, unless you’re literally rocket scientists, expect double that. JPL notes, though, that it did work with schools to adjust the building process and instructions.
There’s flexibility built into the plans, too. So you can load custom apps, connect payloads and sensors to the brain, and modify the mechanics however you’d like. It’s open source, after all. Make it your own.
“We released this rover as a base model. We hope to see the community contribute improvements and additions, and we’re really excited to see what the community will add to it,” said project manager Mik Cox. “I would love to have had the opportunity to build this rover in high school, and I hope that through this project we provide that opportunity to others.”
Gripping something with your hand is one of the first things you learn to do as an infant, but it’s far from a simple task, and only gets more complex and variable as you grow up. This complexity makes it difficult for machines to teach themselves to do, but researchers at Elon Musk and Sam Altman-backed OpenAI have created a system that not only holds and manipulates objects much like a human does, but developed these behaviors all on its own.
Many robots and robotic hands are already proficient at certain grips or movements — a robot in a factory can wield a bolt gun even more dexterously than a person. But the software that lets that robot do that task so well is likely to be hand-written and extremely specific to the application. You couldn’t, for example, give it a pencil and ask it to write. Even something on the same production line, like welding, would require a whole new system.
Yet for a human, picking up an apple isn’t so different from picking up a cup. There are differences, but our brains automatically fill in the gaps: we can improvise a new grip, hold an unfamiliar object securely and so on. This is one area where robots lag severely behind their human models. And furthermore, you can’t just train a bot to do what a human does — you’d have to provide millions of examples to adequately show what a human would do with thousands of given objects.
The system, which they call Dactyl, was provided only with the positions of its fingers and three camera views of the object in hand — but remember, while it was being trained, all this data was simulated, taking place in a virtual environment. There, the computer doesn’t have to work in real time — it can try a thousand different ways of gripping an object in a few seconds, analyzing the results and feeding that data forward into the next try. (The hand itself is a Shadow Dexterous Hand, which is also more complex than most robotic hands.)
In addition to different objects and poses the system needed to learn, there were other randomized parameters, like the amount of friction the fingertips had, the colors and lighting of the scene and more. You can’t simulate every aspect of reality (yet), but you can make sure that your system doesn’t only work in a blue room, on cubes with special markings on them.
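That trick, randomizing simulation parameters every episode, is commonly called domain randomization, and the core of it is simple to sketch. The parameter names and ranges below are invented for illustration; OpenAI's actual simulation randomizes far more quantities.

```python
import random

def randomize_episode(rng):
    """Sample a fresh set of world parameters for one training episode.

    Varying physics and visuals per episode forces the learned policy to
    cope with variation rather than overfit to a single simulated world.
    """
    return {
        "fingertip_friction": rng.uniform(0.7, 1.3),   # grip physics
        "object_mass_scale":  rng.uniform(0.8, 1.2),   # object dynamics
        "light_intensity":    rng.uniform(0.5, 1.5),   # scene appearance
        "camera_jitter_deg":  rng.uniform(-2.0, 2.0),  # viewpoint noise
    }

rng = random.Random(0)
episodes = [randomize_episode(rng) for _ in range(1000)]
```

Each simulated grasp attempt would then run under its own freshly sampled world, so no two training episodes look quite alike.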
They threw a lot of power at the problem: 6144 CPUs and 8 GPUs, “collecting about one hundred years of experience in 50 hours.” And then they put the system to work in the real world for the first time — and it demonstrated some surprisingly human-like behaviors.
The things we do with our hands without even noticing, like turning an apple around to check for bruises or passing a mug of coffee to a friend, use lots of tiny tricks to stabilize or move the object. Dactyl recreated several of them, for example holding the object with a thumb and single finger while using the rest to spin it to the desired orientation.
What’s great about this system is not just the naturalness of its movements and that they were arrived at independently by trial and error, but that it isn’t tied to any particular shape or type of object. Just like a human, Dactyl can grip and manipulate just about anything you put in its hand, within reason of course.
This flexibility is called generalization, and it’s important for robots that must interact with the real world. It’s impossible to hand-code separate behaviors for every object and situation in the world, but a robot that can adapt and fill in the gaps while relying on a set of core understandings can get by.
Consumers using drones in the UK have new safety restrictions they must obey from today, with a change to the law prohibiting drones from being flown above 400ft or within 1km of an airport boundary.
Anyone caught flouting the new restrictions could be charged with recklessly or negligently acting in a manner likely to endanger an aircraft or a person in an aircraft — which carries a penalty of up to five years in prison or an unlimited fine, or both.
The safety restrictions were announced by the government in May, and have been brought in via an amendment to the 2016 Air Navigation Order.
They’re a stop-gap because the government has also been working on a full drone bill — which was originally slated for Spring but has been delayed.
However, the height and airport flight restrictions for drones were pushed forward, given the clear safety risks — after a year-on-year increase in reports of drone incidents involving aircraft.
The Civil Aviation Authority has today published research to coincide with the new laws, saying it’s found widespread support among the public for safety regulations for drones.
Commenting in a statement, the regulator’s assistant director Jonathan Nicholson said: “Drones are here to stay, not only as a recreational pastime, but as a vital tool in many industries — from agriculture to blue-light services — so increasing public trust through safe drone flying is crucial.”
“As recreational drone use becomes increasingly widespread across the UK it is heartening to see that awareness of the Dronecode has also continued to rise — a clear sign that most drone users take their responsibility seriously and are a credit to the community,” he added, referring to the (informal) set of rules developed by the body to promote safe use of consumer drones — ahead of the government legislating.
Additional measures the government has confirmed it will legislate for — announced last summer — include a requirement for owners of drones weighing 250 grams or more to register with the CAA, and for drone pilots to take an online safety test. The CAA says these additional requirements will be enforced from November 30, 2019 — with more information on the registration scheme set to follow next year.
For now, though, UK drone owners just need to make sure they’re not flying too high or too close to airports.
Earlier this month it emerged the government is considering age restrictions on drone use too. Though it remains to be seen whether or not those proposals will make it into the future drone bill.
A pair of Canadian students making a simple, inexpensive prosthetic arm have taken home the grand prize at Microsoft’s Imagine Cup, a global startup competition the company holds yearly. SmartArm will receive $85,000, a mentoring session with CEO Satya Nadella, and some other Microsoft goodies. But they were far from the only worthy team from the dozens that came to Redmond to compete.
The Imagine Cup is an event I personally look forward to, because it consists entirely of smart young students, usually engineers and designers themselves (not yet “serial entrepreneurs”), often aiming to solve real-world problems.
In the semi-finals I attended, I saw a pair of young women from Pakistan looking to reduce stillbirth rates with a new pregnancy monitor, an automated eye-checking device that can be deployed anywhere and used by anyone, and an autonomous monitor for water tanks in drought-stricken areas. When I was their age, I was living at my mom’s house, getting really good at Mario Kart for SNES and working as a preschool teacher.
Even Nadella bowed before their ambitions in his appearance on stage at the final event this morning.
“Last night I was thinking, ‘What advice can I give people who have accomplished so much at such a young age?’ And I said, I should go back to when I was your age and doing great things. Then I realized…I definitely wouldn’t have made these finals.”
That got a laugh, but (with apologies to Nadella) it’s probably true. Students today have unbelievable resources available to them and as many of the teams demonstrated, they’re making excellent use of those resources.
SmartArm in particular combines a clever approach with state of the art tech in a way that’s so simple it’s almost ridiculous.
The issue they saw as needing a new approach was prosthetic arms, which, as they pointed out, are often either non-functional (think of a plastic cosmetic arm or a simple flexion-based gripper) or highly expensive (a mechanical arm might cost tens of thousands of dollars). Why can’t one be both functional and affordable?
Their solution is an extremely interesting and timely one: a relatively simple, actuated, 3D-printed forearm and hand with its own vision system built in. A camera built into the palm captures an image of the item the user aims to pick up and quickly classifies it — an apple, a key ring, a pen — then selects the correct grip for that object.
The user activates the grip by flexing their upper arm muscles, an action that’s detected by a Myo-like muscle sensor (possibly actually a Myo, but I couldn’t tell from the demo). It sends the signal to the arm to activate the hand movement, and the fingers move accordingly.
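As described, that control flow could be sketched roughly like this; the object labels, grip names and EMG threshold are hypothetical stand-ins, not SmartArm's actual values.

```python
# Hypothetical SmartArm-style control step: the palm camera's classifier
# output picks a grip, and a muscle-sensor (EMG) reading above a
# threshold triggers it. All labels and numbers here are invented.

GRIP_FOR_OBJECT = {
    "apple":    "spherical",
    "key ring": "pinch",
    "pen":      "tripod",
}

def select_grip(label):
    """Fall back to a generic power grip for unrecognized objects."""
    return GRIP_FOR_OBJECT.get(label, "power")

def control_step(classified_label, emg_level, emg_threshold=0.6):
    """Return the grip to execute, or None while the muscle is relaxed."""
    if emg_level < emg_threshold:
        return None  # user hasn't flexed; keep the hand idle
    return select_grip(classified_label)
```

Flexing while the camera reports a pen would trigger the tripod grip, while an unrecognized object falls back to a generic power grasp.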
It’s still extremely limited — you likely can’t twist a doorknob with it, or reliably grip a knife or fork, and so on. But for many everyday tasks it could still be useful. And the idea of putting the camera in the palm is a high-risk, high-reward one. It is of course blocked when you pick up the item, but what does it need to see during that time? You deactivate the grip to put the cup down and the camera is exposed again to watch for the next task.
Bear in mind this is not meant as some kind of serious universal hand replacement. But it provides smart, simple functionality for people who might otherwise have had to use a pincer arm or the like. And according to the team, it should cost less than $100. How that’s possible, including the arm sensor, is unclear to me, but I’m not the one who built a bionic arm, so I’m going to defer to them on this. Even if they miss that by 50 percent, it would still be a huge bargain, honestly.
There’s an optional subscription that would allow the arm to improve itself over time as it learns more about your habits and objects you encounter regularly — this would also conceivably be used to improve other SmartArms as well.
As for how it looks — rather robotic — the team defended the design based on their own feedback from amputees: “They’d rather be asked ‘hey, where did you get that arm?’ than ‘what happened to your arm?’” But a more realistic-looking set of fingers is also under development.
The team said they were originally looking for venture funding but ended up getting a grant instead; they’ve got interest from a number of Canadian and American institutions already, and winning the Imagine Cup will almost certainly propel them to greater prominence in the field.
My own questions would be about durability, washing, and the kinds of things that really need to be tested in real-world scenarios. What if the camera lens gets dirty or scratched? Will there be color options for people who don’t want white “skin” on their arm? What’s the support model? What about insurance?
SmartArm takes the grand prize, but the runners up and some category winners get a bunch of good stuff too. I plan to get in touch with SmartArm and several other teams from the competition to find out more and hear about their progress. I was really quite impressed not just with the engineering prowess but the humanitarianism and thoughtfulness on display this year. Nadella summed it up best:
“One of the things that I always think about is this competition in some sense ups the game, right?” he said at the finals. “People from all over the world are thinking about how do I use technology, how do I learn new concepts, but then more importantly, how do I solve some of these unmet, unarticulated needs? The impact that you all can have is just enormous, the opportunity is enormous. But I also believe there is an amazing sense of responsibility, or a need for responsibility, that we all have to collectively exercise given the opportunity we have been given.”
A UK government-backed drone innovation project that’s exploring how unmanned aerial vehicles could benefit cities — including use-cases such as medical delivery, traffic incident response, fire response, and construction and regeneration — has reported early learnings from the first phase of the project.
Five city regions are being used as drone test-beds as part of Nesta’s Flying High Challenge — namely London, the West Midlands, Southampton, Preston and Bradford.
Five socially beneficial use-cases for drone technology have been analyzed as part of the project so far, including their technical, social and economic implications.
The project has been ongoing since December.
Nesta, the innovation-focused charity behind the project and the report, wants the UK to become a global leader in shaping drone systems that place people’s needs first, and writes in the report that: “Cities must shape the future of drones: Drones must not shape the future of cities.”
In the report it outlines some of the challenges facing urban implementations of drone technology and also makes some policy recommendations.
It also says that socially beneficial use-cases have emerged as an early winner in cities’ view of the potential of the tech — over and above “commercial or speculative” applications such as drone delivery or carrying people in flying taxis.
The five use-cases explored thus far via the project are:
Medical delivery within London — a drone delivery network for carrying urgent medical products between NHS facilities, which would routinely carry products such as pathology samples, blood products and equipment over relatively short distances between hospitals in a network
Traffic incident response in the West Midlands — responding to traffic incidents in the West Midlands to support the emergency services prior to their arrival and while they are on-site, allowing them to allocate the right resources and respond more effectively
Fire response in Bradford — emergency response drones for West Yorkshire Fire and Rescue service. Drones would provide high-quality information to support emergency call handlers and fire ground commanders, arriving on the scene faster than is currently possible and helping staff plan an appropriate response for the seriousness of the incident
Construction and regeneration in Preston — drone services supporting construction work for urban projects. This would involve routine use of drones prior to and during construction, in order to survey sites and gather real-time information on the progress of works
Medical delivery across the Solent — linking Southampton across the Solent to the Isle of Wight using a delivery drone. Drones could carry light payloads of up to a few kilos over distances of around 20 miles, with medical deliveries of products being a key benefit
Flagging up technical and regulatory challenges to scaling the use of drones beyond a few interesting experiments, Nesta writes: “In complex environments, flight beyond the operator’s visual line of sight, autonomy and precision flight are key, as is the development of an unmanned traffic management (UTM) system to safely manage airspace. In isolation these are close to being solved — but making these work at large scale in a complex urban environment is not.”
“While there is demand for all of the use cases that were investigated, the economics of the different use cases vary: Some bring clear cost savings; others bring broader social benefits. Alongside technological development, regulation needs to evolve to allow these use cases to operate. And infrastructure like communications networks and UTM systems will need to be built,” it adds.
The report also emphasizes the importance of public confidence, writing that: “Cities are excited about the possibilities that drones can bring, particularly in terms of critical public services, but are also wary of tech-led buzz that can gloss over concerns of privacy, safety and nuisance. Cities want to seize the opportunity behind drones but do it in a way that responds to what their citizens demand.”
And the charity makes an urgent call for the public to be brought into discussions about the future of drones.
“So far the general public has played very little role,” it warns. “There is support for the use of drones for public benefit such as for the emergency services. In the first instance, the focus on drone development should be on publicly beneficial use cases.”
Given the combined (and intertwined) complexity of the regulatory, technical and infrastructure challenges standing in the way of viable drone service implementations, Nesta is also recommending the creation of testbeds in which drone services can be developed with the “facilities and regulatory approvals to support them”.
“Regulation will also need to change: Routine granting of permission must be possible, blanket prohibitions in some types of airspace must be relaxed, and an automated system of permissions — linked to an unmanned traffic management system — needs to be put in place for all but the most challenging uses. And we will need a learning system to share progress on regulation and governance of the technology, within the UK and beyond, for instance with Eurocontrol,” it adds.
“Finally, the UK will need to invest in infrastructure, whether this is done by the public or private sector, to develop the communications and UTM infrastructure required for widespread drone operation.”
In conclusion Nesta argues there is “clear evidence that drones are an opportunity for the UK” — pointing to the “hundreds” of companies already operating in the sector; and to UK universities with research strengths in the area; as well as suggesting public authorities could save money or provide “new and better services thanks to drones”.
At the same time it warns that UK policy responses to drones are lagging those of “leading countries” — suggesting the country could squander the chance to properly develop some early promise.
“The US, EU, China, Switzerland and Singapore in particular have taken bigger steps towards reforming regulations, creating testbeds and supporting businesses with innovative ideas. The prize, if we get this right, is that we shape this new technology for good — and that Britain gets its share of the economic spoils.”