Gadgets – TechCrunch Devin Coldewey

Most mornings, after sifting through the night’s mail haul and skimming the headlines, I make myself a cup of coffee. I use a simple pour-over cone and paper filters, and (in what is perhaps my most tedious Seattleite affectation), I grind the beans by hand. I like the manual aspect of it all. Which is why this robotic pour-over machine is to me so perverse… and so tempting.

Called the Automatica, this gadget, currently raising funds on Kickstarter but seemingly complete as far as development and testing go, is basically a way to do pour-over coffee without holding the kettle yourself.

You fill the kettle and place your mug and cone on the stand in front of it. The water is brought to a boil and the kettle tips automatically. Then the whole mug-and-cone platform spins slowly, distributing the water evenly over the grounds, and stops once 11 ounces have been poured over the correct duration. You can use whatever cone and mug you want as long as they’re about the right size.
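For the curious, here’s a toy sketch of that brew cycle as a simple control loop. The flow rate and rotation speed below are invented for illustration — they’re not specs from the Automatica’s creators.

```python
# A toy simulation of the brew cycle described above -- purely illustrative,
# not based on any published specs for the Automatica. The flow rate and
# rotation speed here are made-up assumptions.

TARGET_OUNCES = 11.0        # volume the article says the machine dispenses
FLOW_OZ_PER_SEC = 0.06      # assumed pour rate
ROTATION_DEG_PER_SEC = 12   # assumed rotation speed of the mug-and-cone platform

def simulate_brew():
    poured, angle, elapsed = 0.0, 0.0, 0
    # Water is assumed to already be at a boil; the kettle tips and the
    # platform rotates until the target volume has been dispensed.
    while poured < TARGET_OUNCES:
        poured += FLOW_OZ_PER_SEC
        angle = (angle + ROTATION_DEG_PER_SEC) % 360   # keep swirling the water
        elapsed += 1
    return elapsed, poured

if __name__ == "__main__":
    seconds, ounces = simulate_brew()
    print(f"Brewed {ounces:.1f} oz in about {seconds // 60}m {seconds % 60}s")
```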

Of course, the whole point of pour-over coffee is that it’s simple: you can do it at home, while on vacation, while hiking, or indeed at a coffee shop with a bare minimum of apparatus. All you need is the coffee beans, the cone, a paper filter — although some cones omit even that — and of course a receptacle for the product. (It’s not the simplest — that’d be Turkish, but that’s coffee for werewolves.)

Why should anyone want to disturb this simplicity? Well, the same reason we have the other 20 methods for making coffee: convenience. And in truth, pour-over is already automated in the form of drip machines. So the obvious next question is, why this dog and pony show of an open-air coffee bot?

Aesthetics! Nothing wrong with that. What goes on in the obscure darkness of a drip machine? No one knows. But this – this you can watch, audit, understand. Even if the machinery is complex, the result is simple: hot water swirls gently through the grounds. And although it’s fundamentally a bit absurd, it is a good-looking machine, with wood and brass accents and a tasteful kettle shape. (I do love a tasteful kettle.)

The creators say the machine is built to last “generations,” a promise which must of course be taken with a grain of salt. Anything with electronics has the potential to short out, to develop a bug, to be troubled by humidity or water leaks. The heating element may fail. The motor might stutter or a hinge catch.

But all that is true of most coffee machines, and unlike those, this one appears to be made with care and high-quality materials. The cracking and warping you can expect in thin molded plastic won’t happen to this thing, and if you take care of it, it should last at least several years.

And it had better, given the minimum pledge price that gets you a machine: $450. That’s quite a chunk of change. But like audiophiles, coffee people are kind of suckers for a nice piece of equipment.

There is of course the standard crowdfunding caveat emptor; this isn’t a pre-order but a pledge to back this interesting hardware startup, and if it’s anything like the last five or six campaigns I’ve backed, it’ll arrive late after facing unforeseen difficulties with machining, molds, leaks, and so on.

Gadgets – TechCrunch Devin Coldewey

The complex optics involved with putting a screen an inch away from the eye in VR headsets could make for smartglasses that correct for vision problems. These prototype “autofocals” from Stanford researchers use depth sensing and gaze tracking to bring the world into focus when someone lacks the ability to do it on their own.

I talked with lead researcher Nitish Padmanaban at SIGGRAPH in Vancouver, where he and the others on his team were showing off the latest version of the system. It’s meant, he explained, to be a better solution to the problem of presbyopia, which is basically when your eyes refuse to focus on close-up objects. It happens to millions of people as they age, even people with otherwise excellent vision.

There are, of course, bifocals and progressive lenses that bend light in such a way as to bring such objects into focus — purely optical solutions, and cheap as well, but inflexible, and they only provide a small “viewport” through which to view the world. There are adjustable-lens glasses as well, but they must be adjusted slowly and manually with a dial on the side. What if you could make the whole lens change shape automatically, depending on the user’s need, in real time?

That’s what Padmanaban and colleagues Robert Konrad and Gordon Wetzstein are working on, and although the current prototype is obviously far too bulky and limited for actual deployment, the concept seems totally sound.

Padmanaban previously worked in VR, and mentioned what’s called the convergence-accommodation problem. Basically, the way that we see changes in real life when we move and refocus our eyes from far to near doesn’t happen properly (if at all) in VR, and that can produce pain and nausea. Having lenses that automatically adjust based on where you’re looking would be useful there — and indeed some VR developers were showing off just that only 10 feet away. But it could also apply to people who are unable to focus on nearby objects in the real world, Padmanaban thought.

This is an old prototype, but you get the idea.

It works like this. A depth sensor on the glasses collects a basic view of the scene in front of the person: a newspaper is 14 inches away, a table three feet away, the rest of the room considerably more. Then an eye-tracking system checks where the user is currently looking and cross-references that with the depth map.

Having been equipped with the specifics of the user’s vision problem, for instance that they have trouble focusing on objects closer than 20 inches away, the apparatus can then make an intelligent decision as to whether and how to adjust the lenses of the glasses.

In the case above, if the user is looking at the table or the rest of the room, the glasses assume whatever normal correction the person requires to see — perhaps none. But if they shift their gaze to focus on the paper, the glasses immediately adjust the lenses (perhaps independently per eye) to bring that object into focus in a way that doesn’t strain the person’s eyes.
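To make that decision step concrete, here’s a minimal sketch of the logic as described. The function name, parameters and simple thin-lens arithmetic are my own assumptions for illustration — the researchers’ actual system obviously does much more (real-time gaze tracking, depth mapping and per-eye adjustment).

```python
# A minimal sketch of the autofocal decision step, assuming a hypothetical
# interface: given how far away the fixated object is and the closest
# distance the user can focus on unaided, return a lens power to apply.

def choose_lens_power(fixation_depth_in: float,
                      near_limit_in: float,
                      baseline_diopters: float = 0.0) -> float:
    """Return the lens power (diopters) to apply for the current gaze target."""
    def diopters(distance_in: float) -> float:
        # Optical power needed to focus at this distance (inches -> meters).
        return 1.0 / (distance_in * 0.0254)

    if fixation_depth_in >= near_limit_in:
        # The user can accommodate on their own; apply only their normal
        # correction (possibly none).
        return baseline_diopters
    # Too close to focus unaided: add the extra power needed to pull the
    # fixation point inside the user's comfortable range.
    return baseline_diopters + diopters(fixation_depth_in) - diopters(near_limit_in)

# Example from the article: trouble focusing closer than 20 inches.
print(choose_lens_power(fixation_depth_in=14, near_limit_in=20))  # newspaper: ~0.84 D extra
print(choose_lens_power(fixation_depth_in=36, near_limit_in=20))  # table: baseline only
```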

The whole process of checking the gaze, depth of the selected object and adjustment of the lenses takes a total of about 150 milliseconds. That’s long enough that the user might notice it happens, but the whole process of redirecting and refocusing one’s gaze takes perhaps three or four times that long — so the changes in the device will be complete by the time the user’s eyes would normally be at rest again.

“Even with an early prototype, the Autofocals are comparable to and sometimes better than traditional correction,” reads a short summary of the research published for SIGGRAPH. “Furthermore, the ‘natural’ operation of the Autofocals makes them usable on first wear.”

The team is currently conducting tests to measure more quantitatively the improvements derived from this system, and test for any possible ill effects, glitches or other complaints. They’re a long way from commercialization, but Padmanaban suggested that some manufacturers are already looking into this type of method and despite its early stage, it’s highly promising. We can expect to hear more from them when the full paper is published.

Gadgets – TechCrunch Devin Coldewey

Humans already find it unnerving enough when extremely alien-looking robots are kicked and interfered with, so one can only imagine how much worse it will be when they make unbroken eye contact and mirror your expressions while you heap abuse on them. This is the future we have selected.

The Simulative Emotional Expression Robot, or SEER, was on display at SIGGRAPH here in Vancouver, and it’s definitely an experience. The robot, a creation of Takayuki Todo, is a small humanoid head and neck that responds to the nearest person by making eye contact and imitating their expression.

It doesn’t sound like much, but it’s pretty complex to execute well, which, despite a few glitches, SEER managed to do.

At present it alternates between two modes: imitative and eye contact. Both, of course, rely on a nearby (or, one can imagine, built-in) camera that recognizes and tracks the features of your face in real time.

In imitative mode the positions of the viewer’s eyebrows and eyelids, and the position of their head, are mirrored by SEER. It’s not perfect — it occasionally freaks out or vibrates because of noisy face data — but when it worked it managed rather a good version of what I was giving it. Real humans are more expressive, naturally, but this little face with its creepily realistic eyes plunged deeply into the uncanny valley and nearly climbed the far side.
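As a rough illustration of what imitative mode implies — tracked facial parameters mirrored onto the robot’s actuators, with smoothing to tame the noisy face data that causes those glitches — here’s a hypothetical sketch. None of these channel names or values come from Todo’s implementation.

```python
# Hypothetical mapping from tracked facial parameters to robot actuator
# targets, with a simple low-pass filter to damp jittery tracker output.
# Channel names and ranges are assumptions for illustration only.

SMOOTHING = 0.8   # exponential smoothing factor; higher = steadier but laggier

def mirror_expression(tracked: dict, previous: dict) -> dict:
    """Blend the latest tracked face data with the previous actuator targets."""
    targets = {}
    for channel in ("brow_left", "brow_right", "eyelid_left", "eyelid_right",
                    "head_pitch", "head_yaw", "head_roll"):
        raw = tracked.get(channel, previous.get(channel, 0.0))
        # Low-pass filter so noisy frames don't make the robot vibrate.
        targets[channel] = SMOOTHING * previous.get(channel, raw) + (1 - SMOOTHING) * raw
    return targets

# One frame of tracked data (made-up values: normalized positions and degrees).
frame = {"brow_left": 0.7, "brow_right": 0.65, "eyelid_left": 0.9,
         "eyelid_right": 0.9, "head_pitch": -5.0, "head_yaw": 12.0, "head_roll": 1.0}
print(mirror_expression(frame, previous={}))
```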

Eye contact mode has the robot moving on its own while, as you might guess, making uninterrupted eye contact with whoever is nearest. It’s a bit creepy, but not in the way that some robots are — when you’re looked at by inadequately modeled faces, it just feels like bad VFX. In this case it was more the surprising amount of empathy you suddenly feel for this little machine.

That’s largely due to the delicate, childlike, neutral sculpting of the face and highly realistic eyes. If an Amazon Echo had those eyes, you’d never forget it was listening to everything you say. You might even tell it your problems.

This is just an art project for now, but the tech behind it is definitely the kind of thing you can expect to be integrated with virtual assistants and the like in the near future. Whether that’s a good thing or a bad one I guess we’ll find out together.

Gadgets – TechCrunch Devin Coldewey

While the field of VR headsets used to be more or less limited to Oculus and Vive, numerous competitors have sprung up as the technology has matured — and some are out to beat the market leaders at their own game. StarVR’s latest headset brings eye-tracking and a seriously expanded field of view to the game, and the latter especially is a treat to experience.

The company announced the new hardware at SIGGRAPH in Vancouver, where I got to go hands-on and eyes-in with the headset. Before you get too excited, though, keep in mind this set is meant for commercial applications — car showrooms, aircraft simulators, and so on. What that means is it’s going to be expensive and not as polished a user experience as consumer-focused sets.

That said, the improvements present in the StarVR One are significant and immediately obvious. Most important is probably the expanded FOV — 210 degrees horizontal and 130 vertical. That’s nearly double the roughly 110 degrees the most popular headsets offer, and believe me, it makes a difference. (I haven’t tried the Pimax 8K, which has a similarly wide FOV.)

On Vive and Oculus sets I always had the feeling that I was looking through a hole into the VR world — a large hole, to be sure, but having your peripheral vision be essentially blank made it a bit claustrophobic.

In the StarVR headset, I felt like the virtual environment was actually around me, not just in front of me. I moved my eyes around much more rather than turning my head, with no worries about accidentally gazing at the fuzzy edge of the display. A 90 Hz refresh rate meant things were nice and smooth.

To throw shade at competitors, the demo I played (I was a giant cyber-ape defending a tower) could switch between the full FOV and a simulation of the 110-degree one found in other headsets. I suspect it was slightly exaggerated, but the difference really is clear.

It’s reasonably light and comfortable — no VR headset is really either. But it doesn’t feel as chunky as it looks.

The resolution of the custom AMOLED display is supposedly 5K. But the company declined to specify the actual resolution when I asked. They did, however, proudly proclaim full RGB pixels and 16 million sub-pixels. Let’s do the math:

16 million divided by 3 makes around 5.3 million full pixels. 5K isn’t a real standard, just shorthand for having around 5,000 horizontal pixels between the two displays. Divide 5.3 million by that and you get 1060. Rounding those off to semi-known numbers gives us 2560 pixels (per eye) for the horizontal and 1080 for the vertical resolution.

That doesn’t fit the approximately 16:10 ratio of the field of view, but who knows? Let’s not get too bogged down in unknowns. Resolution isn’t everything — but generally, the more pixels the better.
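Spelled out, that back-of-the-envelope arithmetic looks like this (all of the inputs are the estimates above, not confirmed specs):

```python
# Reproducing the rough resolution math from the text. These are the
# article's estimates, not figures confirmed by StarVR.

subpixels = 16_000_000
full_pixels = subpixels / 3                 # full RGB: three sub-pixels per pixel
horizontal_total = 5_000                    # "5K" shorthand across both displays
vertical = full_pixels / horizontal_total   # rows of pixels
per_eye_horizontal = horizontal_total / 2   # columns per eye

print(round(full_pixels))         # ~5,333,333 full pixels
print(round(vertical))            # ~1,067 -> call it roughly 1080
print(round(per_eye_horizontal))  # ~2,500 -> call it roughly 2560
```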

The other major new inclusion is an eye-tracking system provided by Tobii. We knew eye-tracking in VR was coming; it was demonstrated at CES, and the Fove Kickstarter showed it was at least conceivable to integrate into a headset now-ish.

Unfortunately the demos of eye-tracking were pretty limited (think a heatmap of where you looked on a car) so, being hungry, I skipped them. The promise is good enough for now — eye tracking allows for all kinds of things, including a “foveated rendering” that focuses display power where you’re looking. This too was not being shown, however, and it strikes me that it is likely phenomenally difficult to pull off well — so it may be a while before we see a good demo of it.

One small but welcome improvement that eye-tracking also enables is automatic detection of interpupillary distance, or IPD — it’s different for everyone and can be important for rendering the image correctly. One less thing to worry about.

The StarVR One is compatible with SteamVR tracking, or you can get the XT version and build your own optical tracking rig — that’s for the commercial providers for whom it’s an option.

Although this headset will be going to high-end commercial types, you can bet that the wide FOV and eye tracking in it will be standard in the next generation of consumer devices. Having tried most of the other headsets, I can say with certainty that I wouldn’t want to go back to some of them after having experienced this one. VR is still a long way off from convincing me it’s worthwhile, but major improvements like these definitely help.

Gadgets – TechCrunch Devin Coldewey

NASA’s ambitious mission to go closer to the Sun than ever before is set to launch in the small hours between Friday and Saturday — at 3:33 AM Eastern from Kennedy Space Center in Florida, to be precise. The Parker Solar Probe, after a handful of gravity assists and preliminary orbits, will enter a stable orbit around the enormous nuclear fireball that gives us all life and sample its radiation from less than 4 million miles away. Believe me, you don’t want to get much closer than that.

If you’re up late tonight (technically tomorrow morning), you can watch the launch live on NASA’s stream.

This is the first mission named after a living researcher, in this case Eugene Parker, who in the ’50s made a number of proposals and theories about the way that stars give off energy. He’s the guy who gave us solar wind, and his research was hugely influential in the study of the sun and other stars — but it’s only now that some of his hypotheses can be tested directly. (Parker himself visited the craft during its construction, and will be at the launch. No doubt he is immensely proud and excited about this whole situation.)

“Directly” means going as close to the sun as technology allows — which leads us to the PSP’s first major innovation: its heat shield, or thermal protection system.

There’s one good thing to be said for the heat near the sun: it’s a dry heat. Because there’s no water vapor or gases in space to heat up, find some shade and you’ll be quite comfortable. So the probe is essentially carrying the most heavy-duty parasol ever created.

It’s a sort of carbon sandwich, with superheated carbon composite on the outside and a carbon foam core. Altogether it’s less than a foot thick, but it reduces the temperature the probe’s instruments are subjected to from 2,500 degrees Fahrenheit to 85 — actually cooler than it is in much of the U.S. right now.

Go on – it’s quite cool.

The car-sized Parker will orbit the sun and constantly rotate itself so that the heat shield is facing inwards and blocking the brunt of the solar radiation. The instruments mostly sit behind it in a big insulated bundle.

And such instruments! There are three major experiments or instrument sets on the probe.

WISPR (Wide-Field Imager for Parker Solar Probe) is a pair of wide-field telescopes that will watch and image the structure of the corona and solar wind. This is the kind of observation we’ve made before — but never from up close. We generally are seeing these phenomena from the neighborhood of the Earth, nearly 100 million miles away. You can imagine that cutting out 90 million miles of cosmic dust, interfering radiation, and other nuisances will produce an amazingly clear picture.

SWEAP (Solar Wind Electrons Alphas and Protons investigation) looks out to the side of the craft to watch the flows of electrons as they are affected by solar wind and other factors. And on the front is the Solar Probe Cup (I suspect this is a reference to the Ray Bradbury story “The Golden Apples of the Sun”), which is exposed to the full strength of the sun’s radiation; a tiny opening allows charged particles in, and by tracking how they pass through a series of charged windows, the instrument can sort them by type and energy.

FIELDS is another that gets the full heat of the sun. Its antennas are the ones sticking out from the sides — they need to be exposed in order to directly sample the electric field surrounding the craft. A set of “fluxgate magnetometers,” clearly a made-up name, measures the magnetic field at an incredibly high rate: two million samples per second.

They’re all powered by solar panels, which seems obvious, but actually it’s a difficult proposition to keep the panels from overloading that close to the sun. They hide behind the shield and just peek out at an oblique angle, so only a fraction of the radiation hits them.

Even then, they’ll get so hot that the team needed to implement the first ever active water cooling system on a spacecraft. Water is pumped through the cells and back behind the shield, where it is cooled by, well, space.

The probe’s mission profile is a complicated one. After escaping the clutches of the Earth, it will swing by Venus — not to get a gravity boost, but “almost like doing a little handbrake turn,” as one official described it. The flyby slows the probe down and sends it closer to the sun — and it’ll do that 7 more times, each pass bringing it closer to the sun’s surface, ultimately arriving in a stable orbit 3.83 million miles above the surface — that’s 95 percent of the way from the Earth to the sun.

On the way it will hit a top speed of 430,000 miles per hour, which will make it the fastest spacecraft ever launched.

Parker will make 24 total passes through the corona, and during these times communication with Earth may be interrupted or impractical. If a solar cell is overheating, do you want to wait 20 minutes for a decision from NASA on whether to pull it back? No. This close to the sun, even a slight miscalculation could reduce the probe to a cinder, so the team has imbued it with more than the usual autonomy.

It’s covered in sensors in addition to its instruments, and an onboard AI will be empowered to make decisions to rectify anomalies. That sounds worryingly like a HAL 9000 situation, but there are no humans on board to kill, so it’s probably okay.

The mission is scheduled to last 7 years, after which time the fuel used to correct the craft’s orbit and orientation is expected to run out. At that point it will continue as long as it can before drift causes it to break apart and, one rather hopes, become part of the sun’s corona itself.

The Parker Solar Probe is scheduled for launch early Saturday morning, and we’ll update this post when it takes off successfully or, as is possible, is delayed until a later date in the launch window.

Gadgets – TechCrunch Natasha Lomas

Elvie, a femtech hardware startup whose first product is a sleek smart pelvic floor exerciser, has inked a strategic partnership with the UK’s National Health Service that will make the device available nationwide through the country’s free-at-the-point-of-use healthcare system, at no direct cost to patients.

It’s a major win for the startup, which was co-founded in 2013 by CEO Tania Boler and Jawbone founder Alexander Asseily with the aim of building smart technology that focuses on women’s issues — an overlooked and underserved category in the gadget space.

Boler’s background before starting Elvie (née Chiaro) included working for the U.N. on global sex education curriculums. But her interest in pelvic floor health, and the inspiration for starting Elvie, began after she had a baby herself and found there was more support in France than in the U.K. for women taking care of their bodies after giving birth.

With the NHS partnership, which is the startup’s first national reimbursement partnership (and therefore, as a spokeswoman puts it, has “the potential to be transformative” for the still young company), Elvie is emphasizing the opportunity for its connected tech to help reduce symptoms of urinary incontinence, including those suffered by new mums or in cases of stress-related urinary incontinence.

The Elvie kegel trainer is designed to make pelvic floor exercising fun and easy for women, with real-time feedback delivered via an app that also gamifies the activity, guiding users through exercises intended to strengthen their pelvic floor and thus help reduce urinary incontinence symptoms. The device can also alert users when they are contracting incorrectly.

Elvie cites research suggesting the NHS spends £233M annually on incontinence, and claims that around a third of women and up to 70% of expectant and new mums currently suffer from urinary incontinence. In 70% of stress urinary incontinence cases, it suggests, symptoms can be reduced or eliminated via pelvic floor muscle training.

And while there’s no absolute need for a device to perform the muscle contractions that strengthen the pelvic floor, the challenge the Elvie Trainer is intended to help with is that it can be difficult for women to know whether they are performing the exercises correctly or effectively.

Elvie cites a 2004 study suggesting around a third of women can’t exercise their pelvic floor correctly with written or verbal instruction alone. It says biofeedback devices (generally, rather than the Elvie Trainer specifically) have been shown to increase the success rate of pelvic floor training programmes by 10% — which other studies suggest can lower surgery rates by 50% and reduce treatment costs by £424 per patient within the first year.

“Until now, biofeedback pelvic floor training devices have only been available through the NHS for at-home use on loan from the patient’s hospital, with patient allocation dependent upon demand. Elvie Trainer will be the first at-home biofeedback device available on the NHS for patients to keep, which will support long-term motivation,” it adds.

Commenting in a statement, Clare Pacey, a specialist women’s health physiotherapist at Kings College Hospital, said: “I am delighted that Elvie Trainer is now available via the NHS. Apart from the fact that it is a sleek, discreet and beautiful product, the app is simple to use and immediate visual feedback directly to your phone screen can be extremely rewarding and motivating. It helps to make pelvic floor rehabilitation fun, which is essential in order to be maintained.”

Elvie is not disclosing commercial details of the NHS partnership but a spokeswoman told us the main objective for this strategic partnership is to broaden access to Elvie Trainer, adding: “The wholesale pricing reflects that.”

Discussing the structure of the supply arrangement, she said Elvie is working with Eurosurgical as its delivery partner — a distributor she said has “decades of experience supplying products to the NHS”.

“The approach will vary by Trust, regarding whether a unit is ordered for a particular patient or whether a small stock will be held so a unit may be provided to a patient within the session in which the need is established. This process will be monitored and reviewed to determine the most efficient and economic distribution method for the NHS Supply Chain,” she added.

Gadgets – TechCrunch Devin Coldewey

Last year I had a good time comparing Sony’s DPT-RP1 with the home-grown reMarkable. They both had their strengths and weaknesses, and one of the Sony’s was that the thing was just plain big. They’ve remedied that with a much smaller sibling, the DPT-CP1, and it’s just as useful as I expected. Which is to say: in a very specific way.

Sony’s e-paper tablets are single-minded little gadgets: all they do is let you read and lightly mark up PDFs. If that sounds a mite too limited to you, you’re not the target demographic. But lots of people — including me — have to wade through tons of PDFs and it’s a pain to do so on a desktop or laptop. Who wants to read Amazon’s Antitrust Paradox by hitting the down arrow 500 times?

For legal documents and scientific journal articles, which I read a lot of, a big e-paper tablet is fantastic. But the truth is that the RP1, with its 13.3″ screen, was simply too big to carry around most of the time. The device is quite light, but took up too much space. So I was excited to check out the CP1, which really is just a smaller version of the same thing.

To be honest, there’s not much I can add to my original review of the RP1: it handles PDFs easily, and now with improved page jumping and tagging, it’s easier to navigate them. And using the stylus, you can make some limited markup — but don’t try to do much more than mark a passage with an “OK” or a little star (one of several symbols the device recognizes and tracks the location of).

It’s incredibly light and thin, and feels flexible and durable as well — not a fragile device at all. Its design is understated and functional.

[gallery: photos of the DPT-CP1]

Writing isn’t the Sony tablets’ strong suit — that would be the reMarkable’s territory. While looping out a circle or striking through a passage is just fine, handwritten notes are a pain. The resolution, accuracy and latency of the writing implement are as far as I can tell exactly as they were on the larger Sony tablet, which makes sense — the CP1 basically is a cutout of the same display and guts.

PDFs display nicely, and the grid pattern on the screen isn’t noticeable for the most part. Contrast isn’t as good as the latest Kindles or Kobos (the shots in the gallery above aren’t really flattering, since they’re so close up, but you get the idea), but it’s more than adequate, and it beats reading a big PDF on a backlit screen like your laptop’s LCD. Battery life is excellent — it’ll go through hundreds of pages on a charge.

A new mobile app supposedly makes transferring documents to the CP1 easy, but in reality I never found a reason to use it. I so rarely download PDFs — the only format the tablet reads — on my phone or tablet that it just didn’t make sense for me. Perhaps I could swap a few over that are already on my iPad, but it just didn’t strike me as particularly practical except perhaps in a few situations where my computer isn’t available. But that’s just me — people who work more from their phones might find this much more useful.

Mainly I just enjoyed how light and simple the thing is. There’s almost no menu system to speak of and the few functions you can do (zooming in and such) are totally straightforward. Whenever I got a big document, like today’s FCC OIG report, or a set of upcoming scientific papers, my first thought was, “I’ll stick these on the Sony and read them on the couch.”

Although I value its simplicity, it really could use a bit more functionality. A note-taking app that works with a Bluetooth keyboard, for instance, or synchronizing with your Simplenote or Pocket account. The reMarkable is still limited as well, but its excellent stylus (suitable for sketching) and cloud service help justify the price.

I have to send this thing back now, which is a shame because it’s definitely a handy device. Of course, the $600 price tag makes it rather a niche one as well — but perhaps it’s the kind of thing that fills out the budget of an IT upgrade or grant proposal.

Gadgets – TechCrunch Natasha Lomas

Analysis of open source information carried out by the investigative website Bellingcat suggests drones that had been repurposed as flying bombs were indeed used in an attack on the president of Venezuela at the weekend.

The Venezuelan government claimed three days ago that an attempt had been made to assassinate President Maduro using two drones loaded with explosives. The president was giving a speech, which was being broadcast live on television, when the incident occurred.

Initial video from a state-owned television network showed Maduro, those around him and a parade of soldiers at the event reacting to what appeared to be two blasts somewhere off camera. But the footage did not include shots of any drones or explosions.

News organization AP also reported that firefighters at the scene had cast doubt on the drone attack claim, suggesting there had instead been a gas explosion in a nearby flat.

Since then more footage has emerged, including videos purporting to show a drone exploding and a drone tumbling alongside a building.

Bellingcat has carried out an analysis of publicly available information related to the attack, syncing timings from the state broadcast of Maduro’s speech and using frame-by-frame analysis combined with photos and satellite imagery of Caracas to pinpoint the locations of additional footage that has emerged, in order to determine whether the drone attack claim stands up.

The Venezuelan government has claimed the drones used were DJI Matrice 600s, each carrying approximately 1kg of C4 plastic explosive and, when detonated, capable of causing damage at a radius of around 50 meters.

DJI Matrice 600 drones are a commercial model, normally used for industrial work — with a U.S. price tag of around $5,000 apiece, suggesting the attack could have cost little over $10k to carry out — with 1kg of plastic explosive available commercially (for demolition purposes) at a cost of around $30.

Bellingcat says its analysis supports the government’s claim that the drone model used was a DJI Matrice 600, noting that the drones involved in the event each had six rotors. It also points to a photo of drone wreckage which appears to show the distinctive silver rotor tip of the model, although it also notes the drones appear to have had their legs removed.

Venezuela’s interior minister, Nestor Reverol, also claimed the government thwarted the attack using “special techniques and [radio] signal inhibitors”, which “disoriented” the drone that detonated closest to the presidential stand — a capability Bellingcat notes the Venezuelan security services are reported to have.

The second drone was said by Reverol to have “lost control” and crashed into a nearby building.

Bellingcat says it is possible to geolocate the video of the falling drone to the same location as the fire in the apartment that firefighters had claimed was caused by a gas canister explosion. It adds that images taken of this location during the fire show a hole in the wall of the apartment in the vicinity of where the drone would have crashed.

“It is a very likely possibility that the downed drone subsequently detonated, creating the hole in the wall of this apartment, igniting a fire, and causing the sound of the second explosion which can be heard in Video 2 [of the state TV broadcast of Maduro’s speech],” it further suggests.

Here’s its conclusion:

From the open sources of information available, it appears that an attack took place using two DBIEDs while Maduro was giving a speech. Both the drones appear visually similar to DJI Matrice 600s, with at least one displaying features that are consistent with this model. These drones appear to have been loaded with explosive and flown towards the parade.

The first drone detonated somewhere above or near the parade, the most likely cause of the casualties announced by the Venezuelan government and pictured on social media. The second drone crashed and exploded approximately 14 seconds later and 400 meters away from the stage, and is the most likely cause of the fire which the Venezuelan firefighters described.

It also considers the claim of attribution by a group on social media calling itself “Soldados de Franelas” (aka ‘T-Shirt Soldiers’ — a reference to a technique used by protestors of wrapping a t-shirt around their head to cover their face and protect their identity). Bellingcat suggests it’s not clear from the group’s Twitter messages that they are “unequivocally claiming responsibility for the event”, owing to the use of passive language, and to a claim that the drones were shot down by government snipers — which it says “does not appear to be supported by the open source information available”.