TechFresh, Consumer Electronics Guide Isaiah

Fitbit Versa

Check out this newly announced fitness smartwatch ‘Versa’ from Fitbit. Compatible with iPhone (4S and later) and Android (4.3 and later), this water-resistant smartwatch (up to 50 meters) is equipped with a 1.34-inch color LCD touchscreen, 2.5GB of internal storage (up to 300 songs) and a 145mAh battery (up to 4+ days of battery life).

Other notable highlights include Optical Heart Rate Monitor, Altimeter, Ambient Light Sensor, 15+ Exercise Modes, Enhanced PurePulse Heart Rate Tracking, Female Health Tracking and Fitbit Pay, which offers contactless payments.

Running on Fitbit OS 2.0, the Versa provides WiFi 802.11 b/g/n, Bluetooth 4.0, GPS and NFC (in special editions) for connectivity. The Fitbit Versa will go on sale in India in Q2 for Rs. 19,999 (about $308). [FoneArena]


Gadgets – TechCrunch John Biggs

When humanity’s back is against the wall and the robots have us cornered I’d say I’m all for whanging a few with a baseball bat. However, until then, we must be kind to our mechanical brethren and this robotic tortoise will help our kids learn that robot abuse is a bad idea.

Researchers at Naver Labs, KAIST, and Seoul National University created this robot to show kids the consequences of their actions when it comes to robots. Called Shelly, the robot reacts to touches and smacks. When it gets scared it changes color and retracts into its shell. Children learn that if they hit Shelly she will be upset and the only thing missing is a set of bitey jaws.

“When Shelly stops its interaction due to a child’s abusive behavior, the others in the group who wanted to keep playing with Shelly often complained about it, eventually restraining each other’s abusive behavior,” Naver Labs’ Jason J. Choi told IEEE. The study found that Shelly’s reactions reduced the amount of abuse the robot took from angry toddlers.
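The reaction loop the researchers describe can be sketched as a tiny state machine. This is purely illustrative (the threshold and calm-down values are made up, and Naver Labs' actual firmware is not public): smacks raise a fear level until the robot retracts, and it only comes back out after being left alone for a while.

```python
# Illustrative sketch of a Shelly-like reaction loop (assumed design, not
# Naver Labs' actual firmware): smacks raise a "fear" level that makes the
# robot retract until it has been left alone for a stretch of quiet ticks.

class ShellyBot:
    FEAR_THRESHOLD = 3   # smacks before retracting (hypothetical value)
    CALM_TICKS = 5       # quiet ticks needed to come back out (hypothetical)

    def __init__(self):
        self.fear = 0
        self.quiet = 0
        self.hiding = False

    def sense(self, event):
        """event is 'smack', 'touch', or None (one tick with no contact)."""
        if event == "smack":
            self.fear += 1
            self.quiet = 0
        elif event == "touch":
            self.quiet = 0
        else:
            self.quiet += 1
        if self.fear >= self.FEAR_THRESHOLD:
            self.hiding = True            # retract into shell, stop playing
        if self.hiding and self.quiet >= self.CALM_TICKS:
            self.fear = 0
            self.hiding = False           # calmed down, come back out

bot = ShellyBot()
for e in ["touch", "smack", "smack", "smack"]:
    bot.sense(e)
print(bot.hiding)  # True: the abuse made Shelly retract
```

The social effect the study found lives outside the code, of course: retracting pauses the game for everyone, so the group itself polices the smacking.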

The researchers showed off Shelly at the ACM/IEEE International Conference on Human Robot Interaction last week.

Gadgets – TechCrunch Darrell Etherington

Game controller compatibility is a labyrinthine nightmare most of the time: Some controllers work with some platforms some of the time, but it’s very hard to keep track of how and when. 8bitdo’s latest accessory adds some simplicity to the mix, enabling use of Xbox One, PlayStation 4 and Nintendo Switch controllers with Switch, Windows and macOS systems quickly and easily.

Yes, that means you can play your PC or Mac games with your favorite Xbox One X/S or DualShock 3/4 controller, or even use a Joy-Con. It also means that you can use a DualShock controller to play Breath of the Wild on the Switch, if that’s what you want to do.

The USB dongle also works with Android TV hardware, and with Raspberry Pi-based devices. It supports DualShock 4 vibration and 6-axis motion control on Switch, and its latency is low enough for lag-sensitive gaming. It’s also a tiny bit smaller than either the dedicated Xbox or PlayStation PC wireless controller USB adapters (and supports a broader range of platforms).

Oh, and it’s also just $20 from Amazon. I’ve been using it for a couple of weeks now and it performs exactly as advertised. If you’re looking to cut down your controller clutter or just have a strong preference for one design over another, this is definitely a smart buy.

Gadgets – TechCrunch Darrell Etherington

One of the better 360-degree cameras out there just got a lot better: The Insta360 One, a standalone 4K 360 camera with a built-in iPhone or Android hardware connector now supports FlowState onboard stabilization. This provides much better automatic stabilization than the Insta360 One supported at launch, and enables a bunch of new editing and formatting features that really improve the value proposition of the $299 gadget.

As you can see above, FlowState allows you to do a lot more with your footage after the fact, including creating smooth pans across footage for exporting to more standard vertical and wide-angle formats (since it’s very rare that people actually watch all that much true 360-degree footage). The changes make Insta360’s device a lot more like the Rylo camera in use, and more suitable for action sports and other adventure-friendly uses.

Users can now add transition points in the mobile app to create dynamic camera angle changes, and also set object or person active tracking. There’s a hyper lapse feature that speeds up time for pulling more action out of even leisurely bike rides, and you can also take over manually to basically direct the experience as if you were shooting it in real time with a traditional video camera, including doing things like zooming.

This update will be pushed out via the updated Insta360 app, and will require a firmware update for existing cameras. It’s a big upgrade for existing users, and a compelling reason to pick this up if you’re looking for something that’s easy to use, compatible with a range of mounts (it has a standard tripod screw mount in its base) and relatively affordable (cheaper than a GoPro Hero 6).

Gadgets – TechCrunch Devin Coldewey

During CES, the single piece of electronics I spent the most time with, apart from my laptop and camera, was a Mattel Dungeons & Dragons Computer Fantasy Game handheld. This decades-old device held the attention of John Biggs and myself through quite a few drinks as we navigated its arcane interface (eventually slaying the dragon, thank you). These cheap handhelds, sold as impulse buys at drug stores and Toys ‘R Us (RIP), are the latest thing to be collected and emulated in full by MAME and the Internet Archive.

At first when I heard this, I was happy but not particularly impressed. They’re great little devices — mostly terrible games, albeit a nostalgic kind of terrible — but how complicated can they be?

Oh, quite complicated, it turns out.

Unlike, say, an NES ROM, these little gadgets don’t have their graphics palettized, their logic isolated, etc. No, each one of these things is a strange and unique little machine. They must be carefully taken apart and their logic teased out by experts.

For one thing, the graphics aren’t pixels accounted for digitally. They’re etched into the liquid crystal system, to be activated when a charge runs through them. In other words, all the graphics are right there on the same screen, arranged like puzzle pieces.

So you may remember Space Jam looking like this:

But the LCD layer looks like this:

All that is hard-wired into the electronic part, where the logic resides that tells the screen which pieces to light up and when.
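The idea is easy to model in a few lines. This is a toy sketch of how a segment-LCD handheld works conceptually, not MAME's actual emulation internals: every possible image is a fixed piece of artwork etched into the glass, and all the game logic can do is switch whole segments on or off each frame.

```python
# Toy model of a segment-LCD handheld (conceptual sketch, not MAME's actual
# internals): the artwork is fixed in the glass; "animation" is just lighting
# up different subsets of the same etched segments frame by frame.

SEGMENTS = {
    "player_left":  "sprite of the player on the left",
    "player_right": "sprite of the player on the right",
    "enemy_1":      "first enemy sprite",
    "missile_a":    "missile at position A",
}

def render(active):
    """Return the names of the etched segments currently lit."""
    return [name for name in SEGMENTS if name in active]

# Two consecutive "frames" are just different subsets of the fixed artwork:
frame1 = render({"player_left", "enemy_1"})
frame2 = render({"player_right", "enemy_1", "missile_a"})
```

That is why dumping these games is so hard: the emulator needs both the chip's logic and a faithful scan of the etched artwork, since neither is meaningful alone.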

I won’t go into the details — read the interesting Internet Archive post if you’re curious. Basically it was a ton of hard work by a bunch of dedicated folks on the MAME crew. Incidentally, thanks to them and everyone else who’s kept that project going for years and years.

The only thing that’s missing is the interface — that is, the plastic. These things were great not because they were actually great games, but because they cost like $10 and would keep your kid occupied on a road trip for a few hours while they got beaten over and over again by the first three enemies. The cheap plastic enclosures and gaudy decorations are part of the fun.

No one wants to play this:

But this?

I’d definitely bug my mom to get me that. In fact, I think I did.

You can check out the scores of games the teams have already digitized at the Handheld History page, and if you’re in an emulatin’ mood, check out the other gazillion systems you can play in the browser in Archive’s Internet Arcade and Console Living Room.

Gadgets – TechCrunch Devin Coldewey

A self-driving vehicle made by Uber has struck and killed a pedestrian. It’s the first such incident and will certainly be scrutinized like no other autonomous vehicle interaction in the past. But on the face of it, it’s hard to understand how, short of a total system failure, this could happen when the entire car has essentially been designed around preventing exactly this situation from occurring.

Something unexpectedly entering the vehicle’s path is pretty much the first emergency event that autonomous car engineers look at. The situation could be many things — a stopped car, a deer, a pedestrian — and the systems are one and all designed to detect them as early as possible, identify them, and take appropriate action. That could be slowing, stopping, swerving, anything.

Uber’s vehicles are equipped with several different imaging systems that handle both ordinary duty (monitoring nearby cars, signs and lane markings) and extraordinary duty like the scenario just described. No fewer than four of them should have picked up the victim in this case.

Top-mounted lidar. The bucket-shaped item on top of these cars is a lidar, or light detection and ranging, system that produces a 3D image of the car’s surroundings multiple times per second. Using infrared laser pulses that bounce off objects and return to the sensor, lidar can detect static and moving objects in considerable detail, day or night.

This is an example of lidar-created imagery, though not specifically what the Uber vehicle would have seen.

Heavy snow and fog can obscure a lidar’s lasers, and its accuracy decreases with range, but for anything from a few feet to a few hundred feet, it’s an invaluable imaging tool and one that is found on practically every self-driving car.

The lidar unit, if operating correctly, should have been able to make out the person in question, if they were not totally obscured, while they were still more than a hundred feet away, and passed on their presence to the “brain” that collates the imagery.
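The geometry behind those range estimates is simple time-of-flight: the pulse travels out and back at the speed of light, so the distance is half the round trip. A back-of-the-envelope sketch (the 200 ns figure is just an example value):

```python
# Back-of-the-envelope lidar ranging: a pulse's round-trip time gives the
# distance to whatever it bounced off, distance = c * t / 2.

C = 299_792_458.0  # speed of light in m/s

def range_from_echo(round_trip_seconds):
    """Distance to the reflecting object, in meters."""
    return C * round_trip_seconds / 2.0

# An echo arriving ~200 nanoseconds after the pulse left means the object
# is roughly 30 meters away:
d = range_from_echo(200e-9)
print(round(d, 1))  # ≈ 30.0 m
```

Firing millions of such pulses per second across a spinning field of view is what builds up the 3D point cloud described above.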

Front-mounted radar. Radar, like lidar, sends out a signal and waits for it to bounce back, but it uses radio waves instead of light. This makes it more resistant to interference, since radio can pass through snow and fog, but also lowers its resolution and changes its range profile.

Tesla’s Autopilot relies mostly on radar.

Depending on the radar unit Uber employed — likely multiple in both front and back to provide 360 degrees of coverage — the range could differ considerably. If it’s meant to complement the lidar, chances are it overlaps considerably, but is built more to identify other cars and larger obstacles.

The radar signature of a person is not nearly so recognizable, but it’s very likely they would have at least shown up, confirming what the lidar detected.

Short and long-range optical cameras. Lidar and radar are great for locating shapes, but they’re no good for reading signs, figuring out what color something is, and so on. That’s a job for visible-light cameras with sophisticated computer vision algorithms running in real time on their imagery.

The cameras on the Uber vehicle watch for telltale patterns that indicate braking vehicles (sudden red lights), traffic lights, crossing pedestrians, and so on. Especially on the front end of the car, multiple angles and types of camera would be used, so as to get a complete picture of the scene into which the car is driving.

Detecting people is one of the most commonly attempted computer vision problems, and the algorithms that do it have gotten quite good. “Segmenting” an image, as it’s often called, generally also involves identifying things like signs, trees, sidewalks and more.

That said, it can be hard at night. But that’s an obvious problem, the answer to which is the previous two systems, which work night and day. Even in pitch darkness, a person wearing all black would show up on lidar and radar, warning the car that it should perhaps slow and be ready to see that person in the headlights. That’s probably why a night-vision system isn’t commonly found in self-driving vehicles (I can’t be sure there isn’t one on the Uber car, but it seems unlikely).

Safety driver. It may sound cynical to refer to a person as a system, but the safety drivers in these cars are very much acting in the capacity of an all-purpose failsafe. People are very good at detecting things, even though we don’t have lasers coming out of our eyes. Our reaction times aren’t the best, but if it’s clear that the car isn’t going to respond, or has responded wrongly, a trained safety driver will react correctly.

Worth mentioning is that there is also a central computing unit that takes the input from these sources and creates its own more complete representation of the world around the car. A person may disappear behind a car in front of the system’s sensors, for instance, and no longer be visible for a second or two, but that doesn’t mean they ceased existing. This goes beyond simple object recognition and begins to bring in broader concepts of intelligence such as object permanence, predicting actions, and the like.

It’s also arguably the most advanced and closely guarded part of any self-driving car system, and so is kept well under wraps.
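The object-permanence idea described above can be sketched very simply. This is purely illustrative (not Uber's system; the miss limit is an assumed value): a tracked object that drops out of sensor view is coasted along its last known motion for a few frames instead of being forgotten the instant it is occluded.

```python
# Minimal sketch of object permanence in tracking (illustrative only, not any
# real self-driving stack): a track that loses its detection is "coasted"
# along its last velocity estimate for a few frames before being dropped.

MAX_MISSES = 3  # frames a track survives with no fresh detection (assumed)

class Track:
    def __init__(self, pos, vel):
        self.pos, self.vel = pos, vel
        self.misses = 0

    def update(self, detection):
        """detection is a new 1D position, or None if the sensors lost it.
        Returns True while the track is still considered alive."""
        if detection is not None:
            self.vel = detection - self.pos   # refresh the motion estimate
            self.pos = detection
            self.misses = 0
        else:
            self.pos += self.vel              # coast on the last estimate
            self.misses += 1
        return self.misses <= MAX_MISSES

t = Track(pos=10.0, vel=1.0)
t.update(11.0)           # seen again, moving +1 per frame
alive = t.update(None)   # occluded: predicted position 12.0, still tracked
```

A real fusion system does this in 3D with uncertainty estimates (typically a Kalman filter), but the principle is the same: a person stepping behind a parked car has not ceased to exist.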

It isn’t clear what the circumstances were under which this tragedy played out, but the car was certainly equipped with technology that was intended to, and should have, detected the person and caused the car to react appropriately. Furthermore, if one system didn’t work, another should have sufficed; multiple fallbacks are simply good practice in high-stakes matters like driving on public roads.

We’ll know more as Uber, local law enforcement, federal authorities, and others investigate the accident.

Gadgets – TechCrunch Devin Coldewey

IBM is hard at work on the problem of ubiquitous computing, and its approach, understandably enough, is to make a computer small enough that you might mistake it for a grain of sand. Eventually these omnipresent tiny computers could help authenticate products, track medications and more.

Look closely at the image above and you’ll see the device both on that pile of salt and on the person’s finger. No, not that big one. Look closer:

It’s an evolution of IBM’s “crypto anchor” program, which uses a variety of methods to create what amounts to high-tech watermarks for products that verify they’re, for example, from the factory the distributor claims they are, and not counterfeits mixed in with genuine items.

The “world’s smallest computer,” as IBM continually refers to it, is meant to bring blockchain capability into this; the security advantages of blockchain-based logistics and tracking could be brought to something as mundane as a bottle of wine or box of cereal.

A schematic shows the parts (you’ll want to view full size).

In addition to getting the computers extra-tiny, IBM intends to make them extra-cheap, perhaps 10 cents apiece. So there’s not much of a lower limit on what types of products could be equipped with the tech.

Not only that, but the usual promises of ubiquitous computing also apply: this smart dust could be all over the place, doing little calculations, sensing conditions, connecting with other motes and the internet to allow… well, use your imagination.

It’s small (about 1mm x 1mm), but it still has the power of a complete computer, albeit not a hot new one. With a few hundred thousand transistors, a bit of RAM, a solar cell and a communications module, it has about the power of a chip from 1990. And we got a lot done on those, right?

Of course at this point it’s very much still a research project in IBM’s labs, not quite a reality; the project is being promoted as part of the company’s “five in five” predictions of turns technology will take in the next five years.

Gadgets – TechCrunch Danny Crichton

Over the weekend, Mark Gurman at Bloomberg reported that Apple has apparently built out a microLED display laboratory in California for testing and manufacturing small batches of the next-generation screen technology, presumably for its iPhone and other devices. Apple had previously acquired microLED startup LuxVue in 2014.

The news of a secret research lab fits into a larger narrative about Apple’s deeper and more expensive focus on research and development. Neil Cybart of Above Avalon, a subscription blog focused on Apple, noted that Apple “is on track to spend $14 billion on R&D in FY2018, nearly double the amount spent on R&D just four years ago” and also pointed out that “The $14 billion of R&D expense that Apple will spend in FY2018 will be more than the amount Apple spent on R&D from 1998 to 2011.”

Those are incredible numbers for any company, but the scale of the R&D output even for Apple is exceptional. Even more notably, Apple’s R&D expenses as a percentage of revenue have been steadily increasing over the past few years and are projected to reach a decade high of 5.3% this year despite higher revenues, according to Cybart.

That revenue percentage may be high for Apple, but it is remarkably low compared to peers in the technology industry. Other companies like Google and Facebook spend more than double and sometimes triple Apple’s percentage of revenue on R&D. Part of the reason is Apple’s sheer revenue and scale, which allows Apple to amortize R&D over greater revenues than its competitors.

The more interesting observation though is that Apple has traditionally avoided having to do the sorts of expensive R&D work involved in areas like chip design and display manufacturing. Instead, the company’s focus has traditionally been on product development and integration, areas that certainly aren’t cheap, but are less expensive than bringing say a new LCD technology to market.

Apple doesn’t produce wireless modems or power management systems for its phones, instead using components from companies like Qualcomm, as in the iPhone X. Even highly-touted features like the iPhone X’s screen aren’t designed by Apple, but instead are designed and manufactured by others, which in the case of the screen was Samsung Display. Apple’s value-add was integrating the display into the phone (that edgeless screen) as well as writing the software that calibrated the color of the screen and ensured its exceptional quality.

For years, that integration-focused R&D model has been a win-win for Apple. The company can use the best technology available at low prices due to its negotiating leverage. Plus, the R&D costs of those components can be amortized not just against iPhones, but all other devices using the technology as well. That meant Apple put its resources behind high-value product development, and could maintain some of the best margins in the hardware industry by avoiding some of the costlier research areas required for its products.

That R&D model changed after Apple bought P.A. Semi almost exactly a decade ago for $278 million. Apple moved from an R&D strategy focused on product development to increasingly owning the key hardware components of its devices. Nowhere is that more visible than in the processing cores at the center of the iPhone. The A11 Bionic processor in the iPhone X, for instance, is completely custom-designed by Apple, and manufactured by TSMC.

Indeed, the processor is an obvious place to start vertically integrating, since it provides so much of the other functionality of the device and also has a large influence on battery life. The FaceID feature, for instance, is powered by a “neural engine” component of the A11 chip.

There is a direct line between creating differentiated features that consumers recognize and are willing to shell out top dollar for, and building out the sorts of custom components that Apple has shied away from in the past. The display is obviously a critical point of differentiation, and so it shouldn’t be surprising that Apple increasingly wants to bring that technology in-house so it can compete better with Samsung.

Alright, so Apple is spending more on R&D to increase differentiation – sounds great. Indeed, one narrative of these expenses is that Apple is investing from a position of strength. Through its sheer force of will, it has become one of the most valuable companies in the world, and it dominates many of the markets in which it competes, most notably smartphones. It has incredible brand loyalty with millions of customers, and it sees an opportunity to expand into new device categories like automotive in order to continue growing and owning more markets. In other words, it is expanding R&D to propel growth.

The more negative view is that Apple is struggling to maintain its hold on a shrinking smartphone industry, and the increasing R&D spend is really a defensive maneuver designed to protect its high sale prices (and thus margins) against significantly cheaper competitors who offer nearly equivalent functionality. Apple’s custom hardware powers its exclusive features, and that creates the differentiation needed to sustain revenues going forward.

There is truth in both narratives, but one thing is for certain: the margin pressure on Apple is increasing. While everyone is making educated guesses at iPhone X sales, many analysts believe that sales have been, and will continue to be, weaker than expected, driven by the device’s high cost. If that is true, then higher prices will not be able to offset higher research and development costs, and the combination will put more of a vice grip on Apple’s future smartphone innovation than the company has previously experienced.

It seems obvious that a company with hundreds of billions of dollars on the balance sheet should just be investing more of that into R&D initiatives like microLED. But analysts care not just about top-line revenue, but also the margins of that revenue. Apple’s increasing spend and declining unit sales portend tougher financial questions for the company going forward.

Gadgets – TechCrunch Darrell Etherington

Action cameras are a gadget that mostly cater to a person’s wish to see themselves in a certain way: Most people aren’t skiing off mountains or cliff diving most of the time, but they aspire to. The issue with most action cameras, though, is that even when you actually do something cool, you still have to shoot the right angle to capture the moment, which is itself a skill. That’s the beauty of Rylo, a tiny 360 camera that minimizes the skill required and makes it easy to get the shots you want.

Rylo is compact enough to have roughly the footprint of a GoPro, but with dual lenses for 4K, 360-degree video capture. It has a removable battery pack good for an hour of continuous video recording, and a micro USB port for charging. In the box, you’ll get either a micro USB to Lightning cable (with the iOS version) or micro USB to micro USB and USB-C cables (with the Android version), and you handle all editing on the mobile device you always have with you.

The device itself feels solid, and has stood up to a lot of travel and various conditions over the course of my usage. The anodized aluminum exterior can take some lumps, and the OLED screen on the device provides just enough info when you’re shooting, without overwhelming. There’s no viewfinder, but the point of the Rylo is that you don’t need one – it’s capturing a full 360-degree image all the time, and you position your shot after the fact in editing.

Rylo includes a 16GB microSD card in the box, too, but you can use up to 256GB versions for more storage. A single button on top controls both power functions and recording, and the simplicity is nice when you’re in the moment and just want to start shooting without worrying about settings.

The basic functionality of Rylo is more than most people will need out of a device like this: Using the app, you can select out an HD, flat frame of video to export, and easily trim the length plus make adjustments to picture, including basic edits like highlights, color and contrast. Rylo’s built-in stabilization keeps things surprisingly smooth, even when you’re driving very fast along a bumpy road with what amounts to nearly race-tuned tires and suspension.

Then, if you want to get really fancy, you can do things like add motion to your clips, including being able to make dead-simple smooth pans from one focus point to another. The end result looks like you’re using a gimbal or other stabilized film camera, but all the equipment you need is the Rylo itself, plus any mount, including the handle/tripod mount that comes in the box, or anything that works with a GoPro.

You can even set a specific follow point, allowing you to track a specific object or person throughout the clip. This works well, though sometimes it’ll lose track of the person or thing if there’s low light or the thing it’s following gets blocked. The app will let you know it’s lost its target, however, and in practice it works well enough to create good-looking videos for things like bicycling and riding ATVs, for instance.

Other companies are trying to do similar things with their own hardware, including GoPro with the Fusion and Insta360 with its Insta360 One. But Rylo’s solution has the advantage of being dead simple to use, with easily portable hardware that’s durable and compatible with existing GoPro mount accessories. The included micro USB to Lightning cable isn’t easily replaced, except from Rylo itself, and it’s also small and easy to lose, so that’s my main complaint about the system as a whole.

In the end, the Rylo does what it’s designed to do: Takes the sting out of creating cool action clips and compelling short movies for people working mostly from their mobile devices. It’s not as flexible for pros looking for a way to integrate more interesting camera angles into their desktop workflow, because content captured on the Rylo is closely tied to the Rylo app itself, but it seems clearly designed for a consumer enthusiast market anyway.

At $499, the Rylo isn’t all that much more expensive than the GoPro Hero 6. It’s still a significant investment, and the image quality isn’t up to the 4K video output by the GoPro, but for users who just want to make cool videos to share among friends using social tools, Rylo’s ease of use and incredibly low bar in terms of filming expertise required is hard to beat.

Gadgets – TechCrunch Megan Rose Dickey

When you get a new car, and you’re feeling like a star, the first thing you’re probably going to do is ghost ride it. This is where the Owl camera can come in.

I’ve been testing Owl, an always-on, two-way camera that records everything that’s happening inside and outside of your car all day, every day for the last couple of weeks.

The Owl camera is designed to monitor your car for break-ins, collisions and police stops. Owl can also be used to capture fun moments (see above) on the road or beautiful scenery, simply by saying, ‘Ok, presto.’

If Owl senses a car accident, it automatically saves the video to your phone, including the 10 seconds before and after the accident. Also, if someone is attracted to your car because of the camera and its blinking green light, and proceeds to steal the camera, Owl will give you another one.
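Saving footage from *before* an event implies the camera is recording continuously into a fixed-size buffer. Here's a minimal sketch of that pattern (an assumed design, not Owl's actual implementation; the frame rate and window length are illustrative): frames are written into a ring buffer, so when a crash is detected the previous seconds are already in memory.

```python
from collections import deque

# How a dashcam can save footage from before an event (assumed design, not
# Owl's actual implementation): frames go into a fixed-size ring buffer, so
# the last N seconds are always in memory when a crash is detected.

FPS = 30          # illustrative frame rate
PRE_SECONDS = 10  # how far back to keep footage

class PreEventRecorder:
    def __init__(self):
        # deque with maxlen silently discards the oldest frame on overflow
        self.ring = deque(maxlen=FPS * PRE_SECONDS)

    def add_frame(self, frame):
        self.ring.append(frame)

    def on_crash(self):
        """Snapshot the last PRE_SECONDS worth of frames for saving."""
        return list(self.ring)

rec = PreEventRecorder()
for i in range(1000):        # ~33 seconds of driving, frames numbered 0..999
    rec.add_frame(i)
clip = rec.on_crash()
print(len(clip), clip[0])    # 300 frames, starting at frame 700
```

The "10 seconds after" half is simpler: the recorder just keeps appending for another window after the trigger before writing the clip out.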

For 24 hours, you can view your driving and any other incidents that happened during the day. You can also, of course, save footage to your phone so you can watch it after 24 hours.

Setting it up

The two-way camera plugs into your car’s on-board diagnostics port (every car built after 1996 has one) and takes just a few minutes to set up. The camera tucks right in between the dashboard and windshield. Once it’s hooked up, you can access your car’s camera anytime via the Owl mobile app.

I was a bit skeptical about the ease with which I’d be able to install the camera, but it was actually pretty easy. From opening the box to getting the camera up and running, it took fewer than ten minutes.

Accessing the footage

This is where it can get a little tricky. If you want to save footage after the fact, Owl requires that you be physically near the camera. That meant I had to put on real clothes and walk outside to my car to connect to the Owl’s Wi-Fi and access the footage from the past 24 hours. Eventually, however, Owl says it will be possible to access that footage over LTE.

But that wasn’t my only qualm with footage access. Once I tried to download the footage, the app would often crash or only download a portion of the footage I requested. This, however, should be easily fixable, given Owl is set up for over-the-air updates. In fact, Owl told me the company is aware of that issue and is releasing a fix this week. If I want to see the live footage, though, that’s easy to access.


Owl is set up to let you know if and when something happens to your car while you’re not there. My Owl’s out-of-the-box settings were set to high sensitivity, which meant I received notifications if a car simply drove by. Changing the settings to a lower sensitivity fixed the annoyance of too many notifications.

Since installing the Owl camera, there hasn’t been a situation in which I was notified of any nefarious behavior happening in or around my car. But I do rest assured knowing that if something does happen, I’ll be notified right away and will be able to see live footage of whatever it is that’s happening.

My understanding is that most of the dash cams on the market aren’t set up to give you 24/7 video access, nor are they designed to be updatable over the air. The best-selling dash cam on Amazon, for example, is a one-way facing camera with collision detection, but it’s not always on. That one retails for about $100 while Amazon’s Choice is one that costs just $47.99, and comes with Wi-Fi to enable real-time viewing and video playback.

Owl is much more expensive than its competition, retailing at $299, with LTE service offered at $10 per month. Currently, Owl is only available as a bundle for $349, which includes one year of the LTE service.

Unlike Owl’s competition, however, the device is always on, due to the fact it plugs into your car’s OBD port. That’s the main, most attractive differentiator for me. To be clear, while the Owl does draw energy from your car’s battery, it’s smart enough to know when it needs to shut down. Last weekend, I didn’t drive my car for over 24 hours, so Owl shut itself down to ensure my battery wasn’t dead once I came back.

Owl, which launched last month, has $18 million in funding from Defy Ventures, Khosla Ventures, Menlo Ventures, Sherpa Capital and others. The company was founded by Andy Hodge, a former product lead at Apple and executive at Dropcam, and Nathan Ackerman, who formerly led development for Microsoft’s HoloLens.

P.S. I was listening to “Finesse” by Bruno Mars and Cardi B in the GIF above.