Lynq is a dead-simple gadget for finding your friends outdoors

If you’ve ever been hiking or skiing, or gone to a music festival or state fair, you know how easy it is to lose track of your friends, and how ridiculous the ensuing exchange of “I’m by the big thing”-type messages can get. Lynq is a gadget that fixes this problem with an ultra-simple premise: it simply tells you how far away and in what direction your friends are, no data connection required.

Apart from a couple of extra little features, that’s really all it does, and I love it. I got a chance to play with a prototype at CES and it worked like a charm.

The peanut-shaped devices use a combination of GPS and kinetic positioning to tell where you are and where any linked Lynqs are, and on the screen all you see is: Ben, 240 feet that way.

Or Ellie.

No pins on a map, no coordinates, no turn-by-turn directions. Just a vector accurate to within a couple of feet that works anywhere outdoors. The little blob that points in their direction moves around as quick as a compass, and gets smaller as they get farther away, broadening out to a full circle as you get within a few feet.

Up to 12 devices can link up, and they should work up to three miles apart (more under some circumstances). The single button switches between the people you’re tracking and activates the device’s few features. You can create a “home” location that linked devices can point toward, and set a safe zone (a radius from your device) that warns you if another linked device leaves it. You can also send basic preset messages like “meet up” or “help.”
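Lynq hasn’t said exactly how the safe-zone check works, but conceptually it’s just a geofence: compare the great-circle distance between two GPS fixes against the configured radius. A minimal sketch (the function names and radius handling here are my own assumptions, not Lynq’s actual implementation):

```python
import math

def haversine_m(a, b):
    """Great-circle distance in meters between two (lat, lon) fixes."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371000 * math.asin(math.sqrt(h))  # mean Earth radius ~6371 km

def outside_safe_zone(home, fix, radius_m):
    """True if a linked device's latest fix has left the safe-zone radius."""
    return haversine_m(home, fix) > radius_m
```

A device would run this check against every incoming fix and raise the on-screen warning the first time it returns True.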

It’s great for outdoors activities with friends, but think about how helpful it could be for tracking kids or pets, for rescue workers, for making sure dementia sufferers don’t wander too far.

The military seems to have liked it as well; U.S. Pacific Command did some testing with the Thai Ministry of Defence and found that it helped soldiers find each other much faster while radio silent, and also helped them get into formation for a search mission quicker. All the officers involved were impressed.

Having played with one for half an hour or so, I can say with confidence that it’s a dandy little device: super intuitive to operate, and totally accurate and responsive in my time with it. It’s clear the team put a lot of effort into making it simple but effective; there’s been a lot of work behind the scenes.

Because the devices send their GPS coordinates directly to each other, the team created a compression algorithm just for that data; if you want fine-grained GPS, that’s quite a few digits to transmit. After compression it’s just a couple of bytes, making it possible to send updates more frequently and reliably than if you’d just blasted out the raw coordinates.
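Lynq hasn’t published its scheme, but the general idea can be sketched: once devices share a reference point, each fix only needs to be sent as a small delta, quantized to roughly foot-level resolution. Everything below (the resolution constant, the signed 16-bit packing) is an illustrative assumption, not Lynq’s actual algorithm:

```python
import struct

# ~3 microdegrees is about 0.33 m (roughly a foot) of latitude per step.
# Two signed 16-bit deltas cover about +/-11 km, enough for a 3-mile range.
RESOLUTION_DEG = 3e-6

def compress_fix(ref, fix):
    """Pack (lat, lon) as two signed 16-bit deltas from ref -> 4 bytes."""
    dlat = round((fix[0] - ref[0]) / RESOLUTION_DEG)
    dlon = round((fix[1] - ref[1]) / RESOLUTION_DEG)
    return struct.pack(">hh", dlat, dlon)

def decompress_fix(ref, payload):
    """Recover an approximate (lat, lon) from a 4-byte delta payload."""
    dlat, dlon = struct.unpack(">hh", payload)
    return (ref[0] + dlat * RESOLUTION_DEG, ref[1] + dlon * RESOLUTION_DEG)
```

Compare that with sending two raw doubles (16 bytes) or coordinate strings: a 4-byte payload leaves far more radio budget for frequent retransmission.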

The display turns off automatically when you let it go to hang by its little clip, saving battery, but it’s always receiving the data, so there’s no lag when you flip it up — the screen comes on and boom, there’s Betty, 450 feet thataway.

The only real issue I had is that the single-button interface, while great for normal usage, is pretty annoying for stuff like entering names and navigating menus. I understand why they kept it simple, and usually it won’t be a problem, but there you go.

Lynq is doing a pre-order campaign on Indiegogo, which I tend to avoid, but I can tell you for sure that this is a real, working thing that anyone who spends much time with friends outdoors will find extremely useful. They’re selling for $154 per pair, which is pretty reasonable, and since that price will probably jump significantly later, I’d say go for it now.


Source: Tech Crunch

Amazon Sumerian, a platform for building AR, VR and 3D apps, is now open to all

Last November, AWS announced a new product called Amazon Sumerian, a toolkit and platform for developers to build “mixed reality” apps spanning virtual reality, augmented reality and 3D, without needing any specialised programming or graphics skills. And now, after several months in private beta, Sumerian is generally available.

In addition to building a mixed reality app, you can also deploy it without writing custom code, Amazon says. The web-based editor integrates with Amazon Lex for natural language and AI, Polly to turn text into speech, AWS Lambda for running code, AWS IoT to connect with Amazon’s IoT platform, and Amazon DynamoDB for NoSQL storage. It supports WebGL and WebVR, and works with Oculus Rift, HTC Vive, iOS devices and Android devices via ARCore. Support for the new Oculus Go is coming, AWS said.

AWS has made huge strides in building out its cloud business, where developers, startups and much larger, mature organizations use the company’s infrastructure to host apps and other services; the business looks to be on track to reach $20 billion this year. More recently, Amazon has been looking at ways of expanding its reach (and revenues) with these companies by offering a deeper range of services running within the cloud. Amazon Sumerian is a part of that strategy.

As Kyle Roche, the GM of Amazon Sumerian, described it, the company saw a gap in the market between the rise of new VR, AR and 3D tech, and a huge pool of organizations that might want to use that technology, but either lack the expertise and resources to do so, or would like to test something out before dedicating those resources more seriously.

“We are targeting enterprises who don’t have the talent in-house,” he said. Tackling new tech can sometimes be “too overwhelming, and this is one way of getting inspiration or prototypes going. Sumerian is a stable way to bootstrap ideas and start conversations. There is a huge business opportunity here.”

He said that early users in the closed beta have included a company developing training for medical devices, Mapbox building a framework for geospatial rendering, a business designing a walk-through of a hotel lobby, e-sports companies, and some media and entertainment properties.

Adam Schouela, the VP of Fidelity Labs, said that the financial services giant has been working on a range of potential applications, including solutions to train its customer relations teams, ways of visualising financial modelling, and services for its customers to discover and use Fidelity’s services.

“What we try to do is look at emerging tech and rapidly build prototypes for Fidelity and the financial services industry,” he told TechCrunch. “We’ve done a lot of work in voice interfaces and user interfaces with AR and VR. When we saw what Sumerian was providing and the potential integration between voice interfaces and VR, we thought this was a great opportunity. With voice interfaces, one of the great use cases is when your eyes and hands are otherwise busy. With VR, it’s stuck to your face, you can’t see and your hands are busy, so voice happens to be a great way of interacting with virtual environments.”



Source: Tech Crunch

The 8 features Amazon and Google must add to the Echo and Home

The Amazon Echo and Google Home are amazing devices, and each has advantages over the other. In my home, we use Amazon Echos and have them around the house and outside: the original in the living room, Dots in the bedrooms, my office and outside, a Tap in my woodworking workshop and Spots in the kids’ rooms (with tape over the camera). They’re great devices but far from perfect. They’re missing several key features, and the Google Home is missing the same things, too.

I polled the TechCrunch staff. The following are the features we would like to see in the next generation of these devices.

IR Blaster

Right now, it’s possible to have the Echo and Home control a TV, but only through third-party devices. If the Echo or Home had a top-mounted 360-degree IR blaster, the smart speakers could natively control TVs, entertainment systems, and heating and cooling units.

Echos and Homes are naturally placed out in the open, making them well suited to controlling devices that sport an infrared port. Saying “turn on the TV” or “turn on the AC” could trigger the speaker to broadcast the appropriate IR codes to the TV or wall-mounted AC unit.

This would require Amazon and Google to integrate a complete universal-remote scheme into the Echo and Home. That’s no small task. Companies such as Logitech (with Harmony) and Universal Remote Control are dedicated to ensuring their remotes are compatible with everything on the market. It seems like an endless battle of discovering new IR codes, but it’s one I wish Amazon and Google would tackle. I would like to be able to control my electric fireplace and powered window shades with my Echo without any hassle.

A dedicated app for music and the smart home

The current Home and Alexa apps are bloated and impractical for daily use. I suspect that’s by design, as it forces users to rely on the speaker for most tasks. The Echo and Home deserve better.

Right now, Amazon and Google seemingly want users to use voice to set up these devices. And that’s fine to a point. If a user is going to use these speakers for listening to Spotify or controlling a set of Hue lights, the current app and voice setup works fine. But if a user wants an Echo to control a handful of smart home devices from different vendors, a dedicated app for the smart home ecosystem should be available — bonus points if there’s a desktop app for even more complex systems.

Look at Sonos. The Sonos One is a fantastic speaker and arguably the best-sounding multi-room speaker system. Even though Alexa is built into the speaker, the Sonos app is still useful, as a similar app would be for the Echo and Home. A dedicated music app would let Echo and Home users more easily browse music sources, select tracks and control playback on different devices.

The smart speakers can be the center of complex smart home ecosystems and deserve a competent companion app for setup and maintenance.

Logitech’s Harmony app is a good example here as well. Its desktop app allows users to set up multiple universal remotes. The same should be available for Echo and Home devices. For example, my kids have their own Spotify accounts and do not need voice access to my Vivint home security system or the Hue bulbs in the living room. I want a way to more easily customize the Echo devices throughout the home. Setting up such a system is currently not possible, and it would be clunky and tiresome to do through a mobile app unless that app were dedicated to the purpose.

Mesh networking

Devices such as Eero and Netgear’s Orbi line are popular because they easily flood an area with wi-fi that’s faster and more reliable than wi-fi broadcast by a single access point. Mesh networking should be included in the Google Home and Amazon Echo.

These devices are designed to be placed out in the open and in common spaces, which is also the best placement for wi-fi routers. Including a mesh networking extender in these devices would increase their appeal and encourage owners to buy more while also improving the owner’s wi-fi. Everyone wins.

Buying Eero seems like the logical play for Amazon or Google. The company already makes one of the best mesh networking products on the market, well designed and packaged in small enclosures. Even if Google or Amazon doesn’t build the mesh networking bits directly into the speaker, they could be included in the speaker’s wall power supply, allowing both companies to quickly implement mesh across their product lines and offer it as a logical secondary purchase.

3.5mm optical output

I have several Dots hooked up to full audio systems thanks to the 3.5mm output. But it’s just two-channel analog, which is fine for NPR but I want more.

For several generations, the MacBook Pro rocked an optical output through the 3.5mm jack. I suspect it wasn’t widely used, which led to Apple dropping it from the latest generation. It would be lovely if the Echo and Home had this option, too.

Right now, a digital connection would not make a large difference in audio quality, since the device streams at a relatively low bitrate. But if either Google or Amazon decides to pursue higher-quality audio like that offered by Tidal, this would be a must-have hardware addition.

Outdoor edition

I spend a good amount of time outside in the summer and managed to install an Echo Dot on my deck. The Dot is not meant to be installed outside, and though my setup has survived a year of weather, it would be great to have an official all-weather Echo that’s much more robust.

Here’s how I installed an Echo Dot on my deck. Mount one of these electrical boxes in a location that would keep the Echo Dot out of the rain. Pop out one of the sides of the box and fit the Dot inside the box. The Dot should be exposed and facing down. Plug in the power cable and 3.5mm cable through the hole in the side and run the audio to an amp like this to power a set of outside speakers. I used asphalt shingles to cover the topside of both devices to protect them from water dripping off the deck. This setup has so far survived a Michigan summer and winter.

I live outside a city and have always had speakers outside. From its location under the deck, the Dot still manages to pick up my voice, letting me control Spotify and my smart home from around the yard. It’s a great experience, and I wish Amazon or Google made an outdoor version of their smart speakers so more people could take their voice assistants outside.

Improved privacy

There’s an inherent creepiness with having devices always listening throughout your home. The Google Home Mini was even caught recording everything and sending the recordings back to Google. Consumers should have more options in how Amazon and Google handle the recorded data.

There should be an option allowing the user to opt out of sending recordings back to Amazon or Google, even if concessions have to be made. If needed, give the user the option of trading away several features, or let the user decide whether recordings should be deleted after a few days or weeks.

Consumers will soon be looking for this sort of control as the topic grows in intensity following Facebook’s blunder, and it would be wise for Google and Amazon to get ahead of their expectations.

A new portable speaker

I use a Tap in my workshop and it does a fine job. But the cloth covering gets dirty. And I discovered it’s not durable after dropping it once. What’s worse, if the always-listening mode is activated, the speaker must be put back on its dock after 12 hours or the battery completely dies.

The Tap was one of the first Amazon Echo devices. Originally users had to hit a button to activate Alexa, but the company added voice activation after it launched. It’s a handy speaker but it’s due for an upgrade.

A portable Echo or Home needs to be all-weather, durable and easy to clean. It needs a dock and a built-in micro-USB port, and it must have voice-activated control — bonus points if it can lock out unknown voices.

Improved accessibility features

Voice assistant devices are making technology more accessible than ever, but there are still features that should be added. Lots of people with speech impairments can hear perfectly well, but an Amazon Echo or Google Home won’t recognize their speech accurately at all.

Apple added this ability to Siri: users can type queries to it, an option available on iOS 11 under the accessibility menu. The Google Home and Amazon Echo should have the same feature.

Users should be able to send text queries to the Echo from their phone (within the Alexa app, via a free-form chatbot), still hear the response and still take advantage of all the skills and smart home integrations. From a technical point of view, it should be trivial, since it wouldn’t need any speech-to-text translation, and it would broaden the device’s appeal to a new market of shoppers.
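The “technically trivial” claim holds up in outline: the assistant’s intent pipeline already consumes text internally, so a typed query can simply skip the speech-to-text stage and feed the same handler. A hypothetical sketch of that routing (none of these names are real Alexa or Assistant APIs):

```python
def handle_query(intent_handler, speech_to_text, audio=None, text=None):
    """Route either spoken audio or a typed query to one intent handler.

    Typed input bypasses speech-to-text entirely; everything downstream
    (skills, smart home integrations, the response) is unchanged.
    """
    query = text if text is not None else speech_to_text(audio)
    return intent_handler(query)
```

The design point is that accessibility comes from adding an entry point, not a new pipeline: both paths converge on the same intent handler.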

Motion sensors

There are several cases where an included motion sensor would improve the user experience of a voice assistant.

A morning alarm could increase in intensity if motion isn’t detected — or, likewise, be deactivated by sensing a set amount of motion. Motion detectors could also act as light switches, turning lights on when motion is detected and off when it no longer is. And there’s more: automatically lowering the volume when no one is in the room, extra sensors for alarm systems, and occupancy detection for HVAC systems.
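To make the escalating-alarm and auto-light ideas concrete, here is a toy sketch of the two behaviors. The thresholds and step sizes are arbitrary assumptions for illustration, not anything Amazon or Google has shipped:

```python
def alarm_volume(base, seconds_without_motion, step=10, interval=30, max_vol=100):
    """Raise alarm volume by `step` every `interval` seconds with no motion."""
    return min(max_vol, base + step * (seconds_without_motion // interval))

def lights_should_be_on(last_motion_ts, now, timeout_s=300):
    """Keep the lights on until `timeout_s` seconds pass without motion."""
    return (now - last_motion_ts) <= timeout_s
```

A speaker with a motion sensor would just re-evaluate both functions each time the sensor reports (or fails to report) movement.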


Source: Tech Crunch

Lost In Space is coming back for a second season

Netflix today announced that it will release a second season of Lost In Space, the big-budget sci-fi series that debuted in April of this year.

The series is a revamp of the original show from the 1960s. Season One, which included 10 episodes, follows the Robinson family on their journey from Earth to Alpha Centauri. Along the way, they stumble across extraterrestrial life and a wide array of life-or-death situations.

Many elements of the original show have been reimagined, not least of which is the role of Dr. Smith, which went to Parker Posey, who plays the delightfully wicked villain.

We reviewed the show on the Original Content podcast in this episode, and struggled to find any meaningful flaws.


Source: Tech Crunch

Apple’s self-driving car fleet grows to 55 in California

Apple now has 55 self-driving cars registered with the California DMV, compared to 27 earlier this year and just three last year. That gives Apple the second-largest fleet of self-driving cars in California.

Apple now has more cars registered than Waymo, which has 51, according to the Department of Motor Vehicles. General Motors’ Cruise, however, leads the pack with 104 vehicles. In total, the DMV has issued self-driving car permits with safety drivers to 53 companies, covering 409 vehicles and 1,573 safety drivers.

Here’s a quick overview of where some of the autonomous driving leaders stand in terms of registered cars:

  • General Motors’ Cruise: 104
  • Apple: 55
  • Waymo: 51
  • Tesla: 39
  • Drive.ai: 14
  • Toyota: 11
  • NVIDIA: 8
  • Lyft: 4
  • Aurora: 4
  • Voyage: 3
  • Didi: 1

Number of safety drivers approved:

  • Apple: 83
  • Waymo: 338
  • GM Cruise: 407

To be clear, the companies listed above only have permits to test self-driving cars with safety drivers on board. As of now, the DMV has not issued any permits for complete driverless testing. In order to conduct driverless testing, companies must have previously tested the vehicles in controlled conditions. The vehicles must also, among many other things, meet the definition of an SAE Level 4 or 5 vehicle. The DMV is currently reviewing two driverless testing permit applications, a DMV spokesperson told TechCrunch.


Source: Tech Crunch

Google and Levi’s ‘connected’ jacket will let you know when your Uber is here

Remember Project Jacquard? Two years ago, Google showed off its “connected” jean jacket designed largely for bike commuters who can’t fiddle with their phone. The jacket launched this past fall, in partnership with Levi’s, offering a way for wearers to control music, screen phone calls, and get directions with a tap or brush of the cuff. Today, Google is adding more functionality to this piece of smart clothing, including support for ride-sharing alerts, Bose’s “Aware Mode,” and location saving.

The features arrived in the Jacquard platform 1.2 update which hit this morning, and will continue to roll out over the week ahead.

It’s sort of odd to see this commuter jacket adding ride-sharing support, given that its primary use case so far has been to offer a safer way to interact with technology when you can’t use your phone — namely, while biking, as showcased in the jacket’s promotional video.

But with the ride-sharing support, it seems that Google wants to make the jacket more functional in general – even for those times you’re not actively commuting.

To use the new feature, jacket owners connect Lyft and/or Uber in the companion mobile app, and assign the “rideshare” ability to the snap tag on the cuff. The jacket will then notify you when your ride is three minutes away and again when it has arrived. When users receive the notification, they can brush in from their jacket to hear more details about their ride.
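The notification logic Google describes — fire once when the ETA reaches three minutes, and once again on arrival — can be sketched as a tiny state machine over a stream of ETA readings. This is an illustration of the behavior, not Jacquard’s actual code:

```python
def rideshare_events(etas_min):
    """Yield each rideshare notification at most once as ETA readings arrive."""
    warned = arrived = False
    for eta in etas_min:
        if eta <= 0 and not arrived:
            arrived = True
            yield "ride arrived"
        elif 0 < eta <= 3 and not warned:
            warned = True
            yield "ride is 3 minutes away"
```

The one-shot flags matter: ETA feeds are noisy, and without them a ride hovering around the three-minute mark would buzz the cuff repeatedly.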

Another new addition is support for Bose’s Aware Mode, which picks up surrounding sounds and sends them to the user’s ear through supported headphones. The feature is helpful in terms of offering some noise reduction without losing the ability to hear important things happening around you – like approaching vehicles, horns, and other people, for example.

Jacquard will now allow users to assign any gesture to toggle Aware Mode on or off for Bose’s QC30 and QC35 headphones.

And lastly, the jacket will let wearers drop a pin on the map to save a location, then see, share or edit it from the app’s Activity screen.

The jacket continues to be a curious experiment in connected clothing — especially given that much of what it can do can now be accomplished with a smartwatch.

Google and Levi’s aren’t sharing sales numbers, so it’s hard to speak to adoption at this point, either.

However, a Google spokesperson did tell us that “[Levi’s is] pleased with the response and continue[s] to be excited to hear from people about what’s useful and what requests they have once they purchase the jacket.”

Given the addition of ride-sharing support, one wonders if maybe the focus is expanding beyond the bike commuters crowd, to those who just don’t like having their smartphone out, in general.


Source: Tech Crunch

You can now try Smart Compose in the new Gmail

Smart Compose, the experimental autocomplete feature in the new Gmail on the web that Google announced at its I/O conference last week, is now available for testing.

Smart Compose is an AI tool that promises to automatically finish your sentences for you, using what it has learned about how people typically write. Based on my experience so far, it’s not quite as good as Google’s demo made us believe it was, but it’s still quite useful and will likely save you a few keystrokes as you go about your day.

You’ll have to enable “Experimental Access” in the new Gmail settings to be considered for this first test. I did so last week and the new feature is now live in my account.

 

I admit that I always feel a bit empty inside when I use Smart Reply, the somewhat more limited version of this feature in the mobile Gmail app that provides you with a few potential two or three-word replies. And I always wonder if the person on the other end knows I was too lazy to write a real answer. But it also makes me feel more productive because I end up answering more emails. It’s a trade-off that Smart Reply is currently winning. My guess is, the same will happen with Smart Compose.

For now, though, Smart Compose is still quite limited (and only works in English). When it works, it’s almost magical, and the suggestion is almost always spot on. But it only works for rather trite sentences so far. If you go off the script, you could write paragraph after paragraph without ever seeing the prompt.

It’ll happily autocomplete any cliché and write “Hi [name],” at the top of your email, which I guess is something, but that doesn’t feel especially intelligent. We’re still looking at an experimental feature, though, and these tools tend to get better as they learn more about how users behave.


Source: Tech Crunch

Nike debuts its most ambitious SNKRS stash drop for the Championship Tour featuring Kendrick Lamar and SZA

On a mild Thursday night at the Los Angeles Forum, Nike’s public relations team and a group of journalists from some of the country’s leading lifestyle, tech, and general interest websites gathered to see the debut of Nike’s most ambitious SNKRS stash drop.

Launched in conjunction with Kendrick Lamar’s Top Dawg Entertainment, the collaboration between Nike and Lamar marks a series of firsts for the world’s largest sports and lifestyle brand.

The combined effort is the first capsule collection Nike has done with a musician. It’s also the first time anyone currently working at the company can remember Nike signing on with a musician for select tour merchandise, and the stash drop through the SNKRS app was the largest such drop the company’s tech had tried to tackle.

For concertgoers rolling up to the show in Supreme sweats, Yeezys, Adidas, Pumas… and, of course, Nikes, the SNKRS stash drop would be a surprise. Folks who had downloaded Nike’s SNKRS app would be able to buy and reserve a pair of Kendrick Lamar’s limited-edition Cortez Kenny IIIs at the concert.

At least on the first night, things didn’t go as planned.

Working with live events like concerts, where timing is less regimented than at a typical sporting event (which is marked by tip-offs and halftimes that adhere to a pretty strict schedule), proved too much for the initial rollout of the company’s stash drop.

Select NikePlus members received an initial push notification of the stash drop, and a card in the SNKRS feed also advertised it, in addition to a notification that flashed onscreen between the (amazing) Schoolboy Q set and SZA’s (equally amazing) performance.

There will be other chances to get the timing down, but for the first concert in Los Angeles, concertgoers were prompted to launch the SNKRS app and try and snag a pair of the limited edition shoes well before the activation actually went live.

Once the shoes did go on sale, the user interface for finding and reserving the shoes didn’t work for everyone there — in fact, only one reporter from the group was able to reserve a pair of the shoes (since that reporter hadn’t saved payment information onto the SNKRS app, those shoes were released).

“I can’t get the app to do what I need,” said one concertgoer trying to snag a pair of shoes.

The team at Nike said the concert’s late start caused the miscue. Roughly 30 minutes after the sneakers were supposed to go on sale, the activation went live — something journalists only learned when notified by Nike’s public relations team.

Once the sale did go live, the shoes sold out within the first five minutes, although it’s unclear how many were made available through the stash drop (Nike declined to provide a number).

Nike’s repeating the stash drop for shows in Houston, New York, Boston and Chicago.

The SNKRS app is only one example of Nike’s innovative approach to integrating technology and fashion. In April, Nike launched the first sneaker that’s integrated with its NikeConnect technology.

Unveiled earlier this year through a collaboration with the NBA, the NikeConnect app allows users to access information on players and stats through a label enabled with near-field communication chips.

Nike’s Air Force Ones enabled with the NikeConnect tech will open a special limited release sneaker sale opportunity called “The Choice”, but Nike has higher hopes for the technology.

“We would love to be able to award sweat equity with access to exclusive products or a partnership,” said a spokesperson for the company in an interview last year.

“NikeConnect [is] a great way for us to get interesting data about our members and deliver unlocks that are relevant to those members,” the spokesperson said.

Beyond the unlocks for exclusive sneaker offers, Nike is thinking about ways to include all of its technology partners in ways that benefit NikeConnect, NikePlus, and SNKRS users.

“We’re excited to learn how unlocks are being received right now,” said the spokesperson. “There is a pretty comprehensive ecosystem of value that we’ve been building for our members… Members who are really active with us are getting rewards or achievements [and] that could include partners like Apple… that we’ll be bringing to the table to round out your whole holistic sport experience.”



Source: Tech Crunch

Adobe CTO leads company’s broad AI bet

There isn’t a software company out there worth its salt that doesn’t have some kind of artificial intelligence initiative in progress right now. These organizations understand that AI is going to be a game-changer, even if they might not have a full understanding of how that’s going to work just yet.

In March at the Adobe Summit, I sat down with Adobe executive vice president and CTO Abhay Parasnis, and talked about a range of subjects with him including the company’s goal to build a cloud platform for the next decade — and how AI is a big part of that.

Parasnis told me that he has a broad set of responsibilities, starting with the typical CTO role of setting the tone for the company’s technology strategy, but it doesn’t stop there by any means. He is also in charge of operational execution for the core cloud platform and all the engineering building out the platform — including AI and Sensei — which means managing a multi-thousand-person engineering team. Finally, he’s in charge of all the digital infrastructure and the IT organization. Just a bit on his plate.

Ten years down the road

The company’s transition from selling boxed software to a subscription-based cloud company began in 2013, long before Parasnis came on board. It has been a highly successful one, but Adobe knew it would take more than simply shedding boxed software to survive long-term. When Parasnis arrived, the next step was to rearchitect the base platform in a way that was flexible enough to last for at least a decade — yes, a decade.

“When we first started thinking about the next generation platform, we had to think about what do we want to build for. It’s a massive lift and we have to architect to last a decade,” he said. There’s a huge challenge because so much can change over time, especially right now when technology is shifting so rapidly.

That meant that they had to build in flexibility to allow for these kinds of changes over time, maybe even ones they can’t anticipate just yet. The company certainly sees immersive technology like AR and VR, as well as voice, as something they need to start thinking about as a future bet — and their base platform had to be adaptable enough to support that.

Making Sensei of it all

But Adobe also needed to get its ducks in a row around AI. That’s why, around 18 months ago, the company made another strategic decision: to develop AI as a core part of the new platform. They saw a lot of companies building more general AI for developers, but Adobe had a different vision, one tightly focused on its core functionality. Parasnis sees this as the key part of the company’s cloud platform strategy. “AI will be the single most transformational force in technology,” he said, adding that Sensei is by far the thing he is spending the most time on.

Photo: Ron Miller

The company began thinking about the new cloud platform with the larger artificial intelligence goal in mind, building AI-fueled algorithms to handle core platform functionality. Once they refined them for use in-house, the next step was to open up these algorithms to third-party developers to build their own applications using Adobe’s AI tools.

It’s actually a classic software platform play, whether the service involves AI or not. Every cloud company from Box to Salesforce has been exposing its services for years, letting developers take advantage of its expertise so they can concentrate on their core knowledge. They don’t have to worry about building something like storage or security from scratch because they can grab those features from a platform that has built-in expertise and provides a way to easily incorporate it into applications.

The difference here is that it involves Adobe’s core functions, so it may be intelligent auto cropping and smart tagging in Adobe Experience Manager or AI-fueled visual stock search in Creative Cloud. These are features that are essential to the Adobe software experience, which the company is packaging as an API and delivering to developers to use in their own software.

Whether Sensei proves to be the technology that drives the Adobe cloud platform for the next 10 years remains to be seen, but Parasnis and the company at large are very much committed to that vision. We should see more announcements from Adobe in the coming months and years as it builds more AI-powered algorithms into the platform and exposes them to developers for use in their own software.

Parasnis certainly recognizes this as an ongoing process. “We still have a lot of work to do, but we are off in an extremely good architectural direction, and AI will be a crucial part,” he said.


Source: Tech Crunch

These schools graduate the most funded startup CEOs

There is no degree required to be a CEO of a venture-backed company. But it likely helps to graduate from Harvard, Stanford or one of about a dozen other prominent universities that churn out a high number of top startup executives.

That is the central conclusion from our latest graduation season data crunch. For this exercise, Crunchbase News took a look at top U.S. university affiliations for CEOs of startups that raised $1 million or more in the past year.

In many ways, the findings weren’t too different from what we unearthed almost a year ago, looking at the university backgrounds of funded startup founders. However, there were a few twists. Here are some key findings:

Harvard fares better in its rivalry with Stanford at educating future CEOs than it did at educating founders. The two universities essentially tied for first place in the CEO alum ranking. (Stanford was well ahead for founders.)

Business schools are big. While MBA programs may be seeing fewer applicants, the degree remains quite popular among startup CEOs. At Harvard and the University of Pennsylvania, more than half of the CEOs on our list are business school alumni.

University affiliation is influential but not determinative for CEOs. The 20 schools featured on our list graduated the CEOs of more than 800 global startups that raised $1 million or more in roughly the past year, a minority of the total.

Below, we flesh out the findings in more detail.

Where startup CEOs went to school

First, let’s start with school rankings. There aren’t many big surprises here. Harvard and Stanford far outpace any other institutions on the CEO list. Each counts close to 150 known alumni among chief executives of startups that raised $1 million or more over the past year.

MIT, University of Pennsylvania, and Columbia round out the top five. Ivy League schools and large research universities constitute most of the remaining institutions on our list of about twenty with a strong track record for graduating CEOs. The numbers are laid out in the chart below:

Traditional MBA popular with startup CEOs

Yes, Bill Gates and Mark Zuckerberg dropped out of Harvard. And Steve Jobs ditched college after a semester. But they are the exceptions in CEO-land.

The typical path for the leader of a venture-backed company is a bit more staid. Degrees from prestigious universities abound. And MBA degrees, particularly from top-ranked programs, are a pretty popular credential.

Top business schools enroll only a small percentage of students at their respective universities. However, these institutions produce a disproportionately large share of CEOs. Degrees from the University of Pennsylvania’s Wharton School, for instance, accounted for the majority of Penn-affiliated CEO alumni. Harvard Business School also graduated more than half of the Harvard-affiliated CEOs. And at Northwestern’s Kellogg School of Management, the share was nearly half.

CEO alumni backgrounds are quite varied

While the educational backgrounds of startup CEOs do show a lot of overlap, there is also plenty of room for variance. About 3,000 U.S. startups and nearly 5,000 global startups with listed CEOs raised $1 million or more since last May. In both cases, those startups were largely led by people who didn’t attend a school on the list above.

Admittedly, the math for this is a bit fuzzy. A big chunk of CEO profiles in Crunchbase (probably more than a third) don’t include a university affiliation. Even taking this into account, however, it looks like more than half of the U.S. CEOs were not graduates of schools on the short list. Meanwhile, for non-U.S. CEOs, only a small number attended a school on the list.

So, with that, some words of inspiration for graduates: If your goal is to be a funded startup CEO, the surest path is probably to launch a startup. Degrees matter, but they’re not determinative.


Source: Tech Crunch