Gadgets – TechCrunch John Biggs

In a truly fascinating exploration into two smart speakers – the Sonos One and the Amazon Echo – BoltVC’s Ben Einstein has found some interesting differences in the way a traditional speaker company and an infrastructure juggernaut look at their flagship devices.

The post is well worth a full read but the gist is this: Sonos, a very traditional speaker company, has produced a good speaker and modified its current hardware to support smart home features like Alexa and Google Assistant. The Sonos One, notes Einstein, is a speaker first and smart hardware second.

“Digging a bit deeper, we see traditional design and manufacturing processes for pretty much everything. As an example, the speaker grill is a flat sheet of steel that’s stamped, rolled into a rounded square, welded, seams ground smooth, and then powder coated black. While the part does look nice, there’s no innovation going on here,” he writes.

The Amazon Echo, on the other hand, looks like what would happen if an engineer were given an unlimited budget and told to build something that people could talk to. The design decisions are odd and intriguing, and it is ultimately less a speaker than a home conversation machine. Plus, it is very expensive to make.

“Pulling off the sleek speaker grille, there’s a shocking secret here: this is an extruded plastic tube with a secondary rotational drilling operation. In my many years of tearing apart consumer electronics products, I’ve never seen a high-volume plastic part with this kind of process. After some quick math on the production timelines, my guess is there’s a multi-headed drill and a rotational axis to create all those holes. CNC drilling each hole individually would take an extremely long time. If anyone has more insight into how a part like this is made, I’d love to see it! Bottom line: this is another surprisingly expensive part,” Einstein continues.

Sonos, which has been making a form of smart speaker for fifteen years, is a CE company with cachet. Amazon, on the other hand, sees its devices as a way into living rooms and a delivery system for sales, and is fine with licensing its tech before making its own. To compare the two is therefore a bit disingenuous. Einstein’s thesis is true: Sonos’ trajectory is troubled because it depends on linear, closed manufacturing techniques while Amazon spares no expense to make its products. But Sonos makes speakers that work together amazingly well, and has done so for a decade and a half. If you compare its products – and I have – with competing smart speakers and non-audiophile “dumb” speakers, you will find that their UI, UX, and sound quality surpass most comers.

Amazon makes things to communicate with Amazon. This is a big difference.

Where Einstein is correct, however, is in his belief that Sonos is at a definite disadvantage. Sonos chases smart technology while Amazon and Google (and Apple, if its HomePod is any indication) lead. That said, there is some value in having a fully connected set of speakers with add-on smart features vs. having to build an entire ecosystem of speaker products that can take on every aspect of the home theater.

On the flip side, Amazon, Apple, and Google are chasing audio quality while Sonos leads. While we can say that in the future we’ll all be fine with tinny round speakers bleating out Spotify in various corners of our rooms, there is something to be said for a good set of woofers. Whether this nostalgic love of good sound survives this generation’s tendency to watch and listen to low-resolution media is anyone’s guess, but that’s Amazon’s bet to lose.

Ultimately, Sonos is a strong and fascinating company. An upstart that survived the great CE destruction wrought by Kickstarter and Amazon, it produces some of the best mid-range speakers I’ve used. Amazon makes a nice – almost alien – product, but given that it can be easily copied and stuffed into a hockey puck that probably costs less than the entire bill of materials for the Amazon Echo, it’s clear that Amazon’s goal isn’t to make speakers.

Whether the coming Sonos IPO will be successful depends partially on Amazon and Google playing ball with the speaker maker. The rest depends on the quality of the product and the dedication of Sonos users. This goodwill isn’t as valuable as a signed contract with major infrastructure players, but it is far more than Amazon and Google have with their popular but potentially intrusive product lines. Sonos lives in the home while Google and Amazon want to invade it. That is where Sonos wins.

Gadgets – TechCrunch Sarah Wells

With long summer evenings comes the perfect opportunity to dust off your old boxes of circuits and wires and start to build something. If you’re short on inspiration, you might be interested in artist and engineer Dan Macnish’s how-to guide on building an AI-powered doodle camera using a thermal printer, a Raspberry Pi, a dash of Python and Google’s Quick, Draw! data set.

“Playing with neural networks for object recognition one day, I wondered if I could take the concept of a Polaroid one step further, and ask the camera to re-interpret the image, printing out a cartoon instead of a faithful photograph,” Macnish wrote on his blog about the project, called Draw This.

To make this work, Macnish drew on Google’s object recognition neural network and the data set created for Google’s game Quick, Draw! Tying the two systems together with some Python code, Macnish was able to have his creation recognize real images and print out the best corresponding doodle in the Quick, Draw! data set.
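For a rough sense of how the pieces fit together, here is a minimal sketch of that pipeline in Python. It stands in an off-the-shelf ImageNet classifier for the recognition step, and the doodle directory and filenames are assumptions for illustration; Macnish’s actual code lives in the GitHub repo mentioned below.

```python
# Hedged sketch of the Draw This pipeline: classify a photo, then pick a
# matching Quick, Draw! doodle to print. DOODLE_DIR and its layout are
# assumed placeholders, not Macnish's real setup.
from pathlib import Path

import numpy as np
from tensorflow.keras.applications.mobilenet_v2 import (
    MobileNetV2, decode_predictions, preprocess_input)
from tensorflow.keras.preprocessing import image

DOODLE_DIR = Path("quickdraw_doodles")  # assumed: one doodle PNG per category
model = MobileNetV2(weights="imagenet")

def classify(photo_path):
    """Return the top-1 ImageNet label for a photo."""
    img = image.load_img(photo_path, target_size=(224, 224))
    batch = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    _, label, _ = decode_predictions(model.predict(batch), top=1)[0][0]
    return label

def doodle_for(label):
    """Map a label to a doodle image, falling back when no category matches."""
    candidate = DOODLE_DIR / f"{label}.png"
    return candidate if candidate.exists() else DOODLE_DIR / "scribble.png"

label = classify("snapshot.jpg")
print(f"Saw {label!r}; sending {doodle_for(label)} to the thermal printer")
```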

But since output doodles are limited to the data set, there can be some discrepancy between what the camera “sees” and what it generates for the photo.

“You point and shoot – and out pops a cartoon; the camera’s best interpretation of what it saw,” Macnish writes. “The result is always a surprise. A food selfie of a healthy salad might turn into an enormous hot dog.”

If you want to give this a go for yourself, Macnish has uploaded the instructions and code needed to build this project on GitHub.

Gadgets – TechCrunch Devin Coldewey

A robot’s got to know its limitations. But that doesn’t mean it has to accept them. This one in particular uses tools to expand its capabilities, commandeering nearby items to construct ramps and bridges. It’s satisfying to watch but, of course, also a little worrying.

This research, from Cornell and the University of Pennsylvania, is essentially about making a robot take stock of its surroundings and recognize something it can use to accomplish a task that it knows it can’t do on its own. It’s actually more like a team of robots, since the parts can detach from one another and accomplish things on their own. But you didn’t come here to debate the multiplicity or unity of modular robotic systems! That’s for the folks at the IEEE International Conference on Robotics and Automation, where this paper was presented (and Spectrum got the first look).

SMORES-EP is the robot in play here, and the researchers have given it a specific breadth of knowledge. It knows how to navigate its environment, but also how to inspect it with its little mast-cam and from that inspection derive meaningful data like whether an object can be rolled over, or a gap can be crossed.

It also knows how to interact with certain objects, and what they do; for instance, it can use its built-in magnets to pull open a drawer, and it knows that a ramp can be used to roll up to an object of a given height or lower.

A high-level planning system directs the robots/robot-parts based on knowledge that isn’t critical for any single part to know. For example, given the instruction to find out what’s in a drawer, the planner understands that to accomplish that, the drawer needs to be open; for it to be open, a magnet-bot will have to attach to it from this or that angle, and so on. And if something else is necessary, for example a ramp, it will direct that to be placed as well.
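To make that chaining concrete, here is a toy backward-chaining planner in Python: to satisfy a goal, it finds an action whose effect is that goal, then recursively satisfies the action’s preconditions first. The actions, preconditions and effects are illustrative stand-ins, not the researchers’ actual domain model.

```python
# Toy backward-chaining planner in the spirit of the paper's high-level
# planner. The domain below (drawers, magnets, ramps) is invented for
# illustration.
GOAL = "drawer_contents_known"

# action -> (preconditions, effect)
ACTIONS = {
    "inspect_drawer":  ({"drawer_open", "camera_in_position"}, "drawer_contents_known"),
    "pull_drawer":     ({"magnet_attached"}, "drawer_open"),
    "attach_magnet":   ({"at_drawer"}, "magnet_attached"),
    "climb_ramp":      ({"ramp_placed"}, "camera_in_position"),
    "place_ramp":      (set(), "ramp_placed"),
    "drive_to_drawer": (set(), "at_drawer"),
}

def plan(goal, state, steps=None):
    """Recursively satisfy preconditions, returning an ordered action list."""
    steps = [] if steps is None else steps
    if goal in state:
        return steps
    # Find the action whose effect achieves this goal.
    action, (pre, _) = next((a, d) for a, d in ACTIONS.items() if d[1] == goal)
    for p in pre:
        plan(p, state, steps)
        state.add(p)
    steps.append(action)
    state.add(goal)
    return steps

print(plan(GOAL, set()))
# e.g. ['drive_to_drawer', 'attach_magnet', 'pull_drawer',
#       'place_ramp', 'climb_ramp', 'inspect_drawer']
# (the order of independent branches may vary with set iteration)
```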

The experiment shown in this video has the robot system demonstrating how this could work in a situation where the robot must accomplish a high-level task using this limited but surprisingly complex body of knowledge.

In the video, the robot is told to check the drawers for certain objects. In the first drawer, the target objects aren’t present, so it must inspect the next one up. But it’s too high — so it needs to get on top of the first drawer, which luckily for the robot is full of books and constitutes a ledge. The planner sees that a ramp block is nearby and orders it to be put in place, and then part of the robot detaches to climb up and open the drawer, while the other part maneuvers into place to check the contents. Target found!

In the next task, it must cross a gap between two desks. Fortunately, someone left the parts of a bridge just lying around. The robot puts the bridge together, places it in position after checking the scene, and sends its forward half rolling towards the goal.

These cases may seem rather staged, but this isn’t about the robot itself and its ability to tell what would make a good bridge. That comes later. The idea is to create systems that logically approach real-world situations based on real-world data and solve them using real-world objects. Being able to construct a bridge from scratch is nice, but unless you know what a bridge is for, when and how it should be applied, where it should be carried and how to get over it, and so on, it’s just a part in search of a whole.

Likewise, many a robot with a perfectly good drawer-pulling hand will have no idea that you need to open a drawer before you can tell what’s in it, or that maybe you should check other drawers if the first doesn’t have what you’re looking for!

Such basic problem-solving is something we take for granted, but nothing can be taken for granted when it comes to robot brains. Even in the experiment described above, the robot failed multiple times for multiple reasons while attempting to accomplish its goals. That’s okay — we all have a little room to improve.

Gadgets – TechCrunch Romain Dillet

French startup Snips has been working on voice assistant technology that respects your privacy. And the company is going to use its own voice assistant for a set of consumer devices. As part of this consumer push, the company is also announcing an initial coin offering.

Yes, it sounds a bit like Snips is playing a game of buzzword bingo. Anyone can currently download the open source Snips SDK and play with it using a Raspberry Pi, a microphone and a speaker. It’s private by design; you can even make it work without any internet connection. Companies can partner with Snips to embed a voice assistant in their own devices, too.
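To give a flavor of that tinkering: Snips hands recognized intents to local code as JSON over MQTT (its Hermes protocol), so a skill can be a short Python script. The sketch below assumes the standard Hermes topic layout; the intent name and payload field are illustrative, not a published skill.

```python
# Minimal sketch of handling a Snips intent on-device. The intent name
# "turnOnLights" and the "input" payload field are assumptions for
# illustration.
import json

import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    payload = json.loads(msg.payload)
    print("Intent:", msg.topic, "->", payload.get("input"))
    # React here: toggle a GPIO pin, call a local service, etc.

client = mqtt.Client()
client.on_message = on_message
client.connect("localhost", 1883)  # Snips runs a local broker on the Pi
client.subscribe("hermes/intent/turnOnLights")
client.loop_forever()  # everything stays on the device; no cloud round trip
```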

But Snips is adding a B2C element to its business. This time, the company is going to compete directly with Amazon Echo and Google Home speakers. You’ll be able to buy the Snips AIR Base and Snips AIR Satellites.

The base will be a good old smart speaker, while satellites will be tiny portable speakers that you can put in all your rooms. The company plans to launch those devices in 18 months.


By default, Snips devices will come with basic skills to control your smart home devices, get the weather, control music, and manage timers, alarms, calendars and reminders. Unlike with the Amazon Echo or Google Home, voice commands won’t be sent to Google’s or Amazon’s servers.

Developers will be able to create skills and publish them on a marketplace. That marketplace will run on a new blockchain — the AIR blockchain.

And that’s where the ICO comes along. The marketplace will accept AIR tokens to buy more skills. You’ll also be able to generate training data for voice commands using AIR tokens. To be honest, I’m not sure why good old credit card transactions weren’t enough. But I guess that’s a good way to raise money.

Gadgets – TechCrunch Devin Coldewey

The initial report by the National Transportation Safety Board on the fatal self-driving Uber crash in March confirms that the car detected the pedestrian as early as 6 seconds before the crash, but did not slow or stop because its emergency braking systems were deliberately disabled.

Uber told the NTSB that “emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior,” in other words to ensure a smooth ride. “The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator.” It’s not clear why the emergency braking capability even exists if it is disabled while the car is in operation. The Volvo model’s built-in safety systems — collision avoidance and emergency braking, among other things — are also disabled while in autonomous mode.

It appears that in an emergency situation like this, this “self-driving car” is no better than, and perhaps substantially worse than, many normal cars already on the road.

It’s hard to understand the logic of this decision. An emergency is exactly the situation when the self-driving car, and not the driver, should be taking action. Its long-range sensors can detect problems accurately from much further away, while its 360-degree awareness and route planning allow it to make safe maneuvers that a human would not be able to do in time. Humans, even when their full attention is on the road, are not the best at catching these things; relying only on them in the most dire circumstances that require quick response times and precise maneuvering seems an incomprehensible and deeply irresponsible decision.

According to the NTSB report, the vehicle first registered Elaine Herzberg on lidar 6 seconds before the crash — at the speed it was traveling, that puts first contact at about 378 feet away. She was first identified as an unknown object, then a vehicle, then a bicycle, over the next few seconds (it isn’t stated when these classifications took place exactly).

[Image: The car following the collision.]

During these 6 seconds, the driver could and should have been alerted of an anomalous object ahead on the left — whether it was a deer, a car, or a bike, it was entering or could enter the road and should be attended to. But the system did not warn the driver and apparently had no way to.

1.3 seconds before impact, which is to say about 80 feet away, the Uber system decided that an emergency braking procedure would be necessary to avoid Herzberg. But it did not hit the brakes, as the emergency braking system had been disabled, nor did it warn the driver because, again, it couldn’t.
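For what it’s worth, the report’s two distances are internally consistent. Working backward from the article’s own figures (this is arithmetic only, not new reporting):

```python
# Sanity check on the timeline: 378 ft covered in 6 s implies the speed,
# which in turn reproduces the "about 80 feet" figure at 1.3 s.
FT_PER_MILE, SEC_PER_HOUR = 5280, 3600

speed_fps = 378 / 6.0                          # = 63 ft/s
print(speed_fps * SEC_PER_HOUR / FT_PER_MILE)  # ~43 mph
print(speed_fps * 1.3)                         # ~82 ft before impact
```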

Then, less than a second before impact, the driver happened to look up from whatever it was she was doing and saw Herzberg, whom the car had known about in some way for five long seconds by then. The car struck and killed her.

It reflects extremely poorly on Uber that it disabled the car’s ability to respond in an emergency (even though the car was authorized to speed at night) and provided no method for the system to alert the driver should it detect something important. This isn’t just a safety issue, like going on the road with a sub-par lidar system or without checking the headlights – it’s a failure of judgment by Uber, and one that cost a person’s life.

Arizona, where the crash took place, barred Uber from further autonomous testing, and Uber yesterday ended its program in the state.

Uber offered the following statement on the report:

Over the course of the last two months, we’ve worked closely with the NTSB. As their investigation continues, we’ve initiated our own safety review of our self-driving vehicles program. We’ve also brought on former NTSB Chair Christopher Hart to advise us on our overall safety culture, and we look forward to sharing more on the changes we’ll make in the coming weeks.

Gadgets – TechCrunch Jordan Crook

Disrupt SF is set to be the biggest tech conference that TechCrunch has ever hosted. So it only makes sense that we plan an agenda fit for the occasion.

That’s why we’re absolutely thrilled to announce that Ring’s Jamie Siminoff will join us on stage for a fireside chat and Jason Mars from Clinc will be demo-ing first-of-its-kind technology on the Disrupt SF stage.

Jamie Siminoff – Ring

Earlier this year, Ring became Amazon’s second largest acquisition ever, selling to the behemoth for a reported $1 billion.

But the story begins long ago, with Jamie Siminoff building a WiFi-connected video doorbell in his garage in 2011. Back then it was called DoorBot. Now, it’s called Ring, and it’s an essential piece of the overall evolution of e-commerce.

As giants like Amazon move to make purchasing and receiving goods as simple as ever, safe and reliable entry into the home becomes critical to the mission. Ring, which has made neighborhood safety and home security its main priority since inception, is a capable partner in that mission.

Of course, one doesn’t often build a successful company and sell it for $1 billion on the first go. Prior to Ring, Siminoff founded PhoneTag, the world’s first voicemail-to-text company, and Unsubscribe.com. Both of those companies were sold. Based on his founding portfolio alone, it’s clear that part of Siminoff’s success can be attributed to understanding what consumers need and executing on a solution.

Dr. Jason Mars – Clinc

AI has the potential to change everything, but there is a fundamental disconnect between what AI is capable of and how we interface with it. Clinc has tried to close that gap with its conversational AI, emulating human intelligence to interpret unstructured, unconstrained speech.

Clinc is currently targeting the financial market, letting users converse with their bank account using natural language without any pre-defined templates or hierarchical voice menus.

But there are far more applications for this kind of conversational tech. As voice interfaces like Alexa and Google Assistant pick up steam, there is clearly an opportunity to bring this kind of technology to all facets of our lives.

At Disrupt SF, Clinc’s founder and CEO Dr. Jason Mars plans to do just that, debuting other ways that Clinc’s conversational AI can be applied. Without ruining the surprise, let me just say that this is going to be a demo you won’t want to miss.

Tickets to Disrupt are available here.

Gadgets – TechCrunch Devin Coldewey

It’s not enough in this day and age that we have to deal with fake news, we also have to deal with fake prescription drugs, fake luxury goods, and fake Renaissance-era paintings. Sometimes all at once! IBM’s Verifier is a gadget and platform made (naturally) to instantly verify that something is what it claims to be, by inspecting it at a microscopic level.

Essentially you stick a little thing on your phone’s camera, open the app, and put the sensor against what you’re trying to verify, be it a generic antidepressant or an ore sample. By combining microscopy, spectroscopy, and a little bit of AI, the Verifier compares what it sees to a known version of the item and tells you whether they’re the same.

The key component in this process is an “optical element” that sits in front of the camera (it can be anything that takes a decent image) amounting to a specialized hyper-macro lens. It allows the camera to detect features as small as a micron — for comparison, a human hair is usually a few dozen microns wide.

At the micron level there are patterns and optical characteristics that aren’t visible to the human eye, like precisely which wavelengths of light it reflects. The quality of a weave, the number of flaws in a gem, the mixture of metals in an alloy… all stuff you or I would miss, but a machine learning system trained on such examples will pick out instantly.

For instance, a counterfeit pill that is orange and smooth and imprinted just like a real one to the naked eye will likely appear totally different at the micro level: textures and structures with a very distinct pattern, or at least one distinct from the real thing, not to mention a spectral signature that’s probably way different. There’s also no reason it can’t be used on things like expensive wines or oils, contaminated water, currency, and plenty of other items.

IBM was eager to highlight the AI element, which is trained on the various patterns and differentiates between them, though as far as I can tell it’s a pretty straightforward classification task. I’m more impressed by the lens they put together that can resolve at a micron level with so little distortion and not exclude or distort the colors too much. It even works on multiple phones — you don’t have to have this or that model.
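For the curious, here is the shape of that classification task, sketched as a nearest-match comparison over feature vectors. The features, reference values and threshold are invented placeholders; IBM has not published the actual model.

```python
# Hedged sketch of the kind of classification the Verifier performs:
# compare a sample's micro-texture/spectral feature vector against known
# references. All numbers here are invented placeholders.
import numpy as np

# Assumed: each item is summarized as a feature vector (e.g., texture
# statistics plus reflectance at a few wavelengths).
references = {
    "genuine_pill":     np.array([0.82, 0.11, 0.43, 0.91]),
    "counterfeit_pill": np.array([0.35, 0.62, 0.48, 0.40]),
}

def verify(sample, refs, threshold=0.2):
    """Nearest-centroid match; reject if nothing is close enough."""
    label, dist = min(((k, np.linalg.norm(sample - v)) for k, v in refs.items()),
                      key=lambda kv: kv[1])
    return label if dist < threshold else "unknown"

print(verify(np.array([0.80, 0.13, 0.45, 0.89]), references))  # genuine_pill
```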

The first application IBM is announcing for its Verifier is as a part of the diamond trade, which is of course known for fetishizing the stones and their uniqueness, and also for establishing elaborate supply chains to ensure product is carefully controlled. The Verifier will be used as an aid for grading stones, not on its own but as a tool for human checkers; it’s a partnership with the Gemological Institute of America, which will test integrating the tool into its own workflow.

By imaging the stone from several angles, the individual identity of the diamond can be recorded and tracked as well, so that its provenance and trail through the industry can be tracked over the years. Here IBM imagines blockchain will be useful, which is possible but not exactly a given.
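For context, the provenance idea boils down to an append-only, tamper-evident log of a stone’s scans. Here is a toy hash-chained version; whether IBM’s system looks anything like this is, as noted, not a given.

```python
# Toy illustration of "blockchain for provenance": each record commits to
# the previous one's hash, so tampering with history is detectable. Purely
# illustrative; not IBM's implementation.
import hashlib
import json
import time

def add_record(chain, stone_id, event):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"stone": stone_id, "event": event,
              "time": time.time(), "prev": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    chain.append(record)
    return chain

ledger = []
add_record(ledger, "GIA-12345", "graded: VS1, 1.02ct")
add_record(ledger, "GIA-12345", "sold: wholesaler -> retailer")
# Altering an earlier record breaks every later hash link in the chain.
```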

It’ll be a while before you can have one of your own, but here’s hoping this type of tech becomes popular enough that you can check the quality or makeup of something at least without having to visit some lab.

Gadgets – TechCrunch Mike Butcher

Excited to announce that this year’s The Europas Unconference & Awards is shaping up! Our half-day Unconference kicks off on 3 July, 2018 at The Brewery in the heart of London’s “Tech City” area, followed by our startup awards dinner and a fantastic party celebrating European startups!

The event is run in partnership with TechCrunch, the official media partner. Attendees, nominees and winners will get deep discounts to TechCrunch Disrupt in Berlin later this year.

The Europas Awards are based on voting by expert judges and the industry itself. But key to the daytime program are the speakers and invited guests. There’s no “off-limits speaker room” at The Europas, so attendees can mingle easily with VIPs and speakers.

What exactly is an Unconference? We’re dispensing with the lectures and going straight to the deep dives, where you’ll get a front-row seat with Europe’s leading investors, founders and thought leaders to discuss and debate the most urgent issues, challenges and opportunities. Up close and personal! And, crucially, just a few feet away from handing over a business card. The Unconference is organized into zones including AI, Fintech, Mobility, Startups, Society, Enterprise, and Crypto/Blockchain.

We’ve confirmed 10 new speakers including:

Eileen Burbidge, Passion Capital
Carlos Eduardo Espinal, Seedcamp
Richard Muirhead, Fabric Ventures
Sitar Teli, Connect Ventures
Nancy Fechnay, Blockchain Technologist + Angel
George McDonaugh, KR1
Candice Lo, Blossom Capital
Scott Sage, Crane Venture Partners
Andrei Brasoveanu, Accel
Tina Baker, Jag Shaw Baker

How To Get Your Ticket For FREE

We’d love for you to ask your friends to join us at The Europas – and we’ve got a special way to thank you for sharing.

Your friend will enjoy a 15% discount off the price of their ticket with your code, and you’ll get 15% off the price of YOUR ticket.

That’s right, we will refund you 15% off the cost of your ticket automatically when your friend purchases a Europas ticket.

So you can grab tickets here.

Vote for your Favourite Startups

Public Voting is still humming along. Please remember to vote for your favourite startups!

Awards by category:

Hottest Media/Entertainment Startup

Hottest E-commerce/Retail Startup

Hottest Education Startup

Hottest Startup Accelerator

Hottest Marketing/AdTech Startup

Hottest Games Startup

Hottest Mobile Startup

Hottest FinTech Startup

Hottest Enterprise, SaaS or B2B Startup

Hottest Hardware Startup

Hottest Platform Economy / Marketplace

Hottest Health Startup

Hottest Cyber Security Startup

Hottest Travel Startup

Hottest Internet of Things Startup

Hottest Technology Innovation

Hottest FashionTech Startup

Hottest Tech For Good

Hottest A.I. Startup

Fastest Rising Startup Of The Year

Hottest GreenTech Startup of The Year

Hottest Startup Founders

Hottest CEO of the Year

Best Angel/Seed Investor of the Year

Hottest VC Investor of the Year

Hottest Blockchain/Crypto Startup Founder(s)

Hottest Blockchain Protocol Project

Hottest Blockchain DApp

Hottest Corporate Blockchain Project

Hottest Blockchain Investor

Hottest Blockchain ICO (Europe)

Hottest Financial Crypto Project

Hottest Blockchain for Good Project

Hottest Blockchain Identity Project

Hall Of Fame Award – Awarded to a long-term player in Europe

The Europas Grand Prix Award (to be decided from winners)

The Awards celebrate the most forward-thinking and innovative tech and blockchain startups across more than 30 categories.

Startups can apply for an award or be nominated by anyone, including our judges. It is free to enter or be nominated.

What is The Europas?

Instead of thousands and thousands of people, think of a great summer event with 1,000 of the most interesting and useful people in the industry, including key investors and leading entrepreneurs.

• No secret VIP rooms, which means you get to interact with the Speakers

• Key Founders and investors speaking; featured attendees invited to just network

• Expert speeches, discussions, and Q&A directly from the main stage

• Intimate “breakout” sessions with key players on vertical topics

• The opportunity to meet almost everyone in those small groups, super-charging your networking

• Journalists from major tech titles, newspapers and business broadcasters

• A parallel Founders-only track geared towards fund-raising and hyper-networking

• A stunning awards dinner and party which honors both the hottest startups and the leading lights in the European startup scene

• All on one day to maximise your time in London. And it’s PROBABLY sunny!


That’s just the beginning. There’s more to come…


Interested in sponsoring the Europas or hosting a table at the awards? Or purchasing a table for 10 or 12 guests, or a half table for 5 guests? Get in touch with:
Petra Johansson
Petra@theeuropas.com
Phone: +44 (0) 20 3239 9325

Gadgets – TechCrunch John Biggs

As a hater of all sports, I am particularly excited about the imminent replacement of humans with robots in soccer. If this exciting match – the Standard Platform League (SPL) final of the German Open, featuring Nao-Team HTWK vs. the Nao Devils – is any indication, the future is going to be great.

The robots are all NAO robots by SoftBank, and they are all designed according to the requirements of the Standard Platform League. The robots can run (sort of), kick (sort of), and lift themselves up if they fall. The 21-minute video is a bit of a slog and the spectators are definitely not drunk hooligans, but darn if it isn’t great to see little robots hitting the turf to grab a ball before it hits the goal.

I, for one, welcome our soccer-playing robot overlords.

Gadgets – TechCrunch Devin Coldewey

Teradyne, a prosaic-sounding but flush company that provides automated testing equipment for industrial applications, has acquired the Danish robotics company MiR for an eye-popping $148 million, with another $124 million on the table if performance goals are met.

MiR, which despite the lowercase “i” stands for Mobile Industrial Robots, does what you might guess. Founded in 2013, the company has grown steadily and had a huge 2017, tripling its revenues to $12 million after its latest robot, the MiR200, received high marks from customers.

MiR’s robots are of the warehouse sort: wheeled little autonomous fellows that can lift and pull pallets, boxes, and so on. They look a bit like the little ones that are always underfoot in Star Wars movies. It’s a natural fit for Teradyne, especially with the latter’s recent purchase of the well-known Universal Robots in a $350 million deal in 2015.

Testing loads of electronics and components may be a dry business, but it’s a booming one, because the companies that test faster ship faster. Any time efficiencies can be made in the process, be it warehouse logistics or assisting expert humans in sensitive procedures, one can be sure a company will be willing to pay for them.

Teradyne also noted (the Robot Report points out) that both companies take a modern approach to how robots interact with and are trained by people – the old paradigm of robotics specialists having to carefully program these things doesn’t scale well, and both UR and MiR were forward-thinking enough to improve that pain point.

The plan is, of course, to take MiR’s successful technology global, hopefully recreating its success on a larger scale.

“My main focus is to get our mobile robots out to the entire world,” said MiR CSO and founder Niels Jul Jacobsen in the press release announcing the acquisition. “With Teradyne as the owner, we will have strong backing to ensure MiR’s continued growth in the global market.”