Diving into Google Cloud Next and the future of the cloud ecosystem

Extra Crunch offers members the opportunity to tune into conference calls led and moderated by the TechCrunch writers you read every day. This week, TechCrunch’s Frederic Lardinois and Ron Miller offered up their analysis on the major announcements that came out of Google’s Cloud Next conference this past week, as well as their opinions on the outlook for the company going forward.

Google Cloud announced a series of products, packages and services that it believes will improve the company’s competitive position and differentiate it from AWS and other peers. Frederic and Ron discuss all of Google’s most promising announcements, including its product for managing hybrid clouds, its new end-to-end AI platform and the company’s heightened effort to improve customer service, communication and ease of use.

“They have all of these AI and machine learning technologies, they have serverless technologies, they have containerization technologies — they have this whole range of technologies.

But it’s very difficult for the average company to take these technologies and know what to do with them, or to have the staff and the expertise to be able to make good use of them. So, the more they do things like this where they package them into products and make them much more accessible to the enterprise at large, the more successful that’s likely going to be because people can see how they can use these.

…Google does have thousands of engineers, and they have very smart people, but not every company does, and that’s the whole idea of the cloud. The cloud is supposed to take this stuff, put it together in such a way that you don’t have to be Google, or you don’t have to be Facebook, you don’t have to be Amazon, and you can take the same technology and put it to use in your company.”

Image via Bryce Durbin / TechCrunch

Frederic and Ron dive deeper into how the new offerings may impact Google’s market share in the cloud ecosystem and which verticals represent the best opportunity for Google to win. The two also dig into the future of open source in cloud and how they see customer use cases for cloud infrastructure evolving.

For access to the full transcription and the call audio, and for the opportunity to participate in future conference calls, become a member of Extra Crunch. Learn more and try it for free. 


Source: Tech Crunch

Whither native app developers?

I’ve noticed something interesting lately. Five years ago, senior developers with significant iOS experience available for new work seemed approximately as easy to find as unicorns who also laid golden eggs. Even two years ago, they were awfully hard to unearth. This year, though? Maybe it’s just a random blip — but this year, like the truth, they seem to be out there. And a few things make me suspect it’s not a blip.

App Annie’s “State of Mobile 2019” refers obliquely to “mobile maturity,” i.e. the point at which the number of downloads per year flattens out in a given market. That same report shows that the US is there; the number of app downloads in the US increased a paltry 5% from 2016 to 2018 — though it’s worth noting that app revenue that flowed through the app store increased 70% in that same time.

Meanwhile, the number of apps in the iOS App Store is essentially flat over the last two years — this has been influenced by more stringent approval standards from Apple, yes, but is still noteworthy.

Meanwhile meanwhile, non-native cross-platform development platforms are growing in popularity. “We scanned Microsoft’s iOS and Android apps and discovered that 38 of them, including the likes of Word, Excel, Xbox, and many others, were recently updated to include React Native,” reports AppFigures, which adds, “In the last year use of React Native has nearly doubled.”

I can confirm anecdotally that clients are increasingly interested in building cross-platform apps, or at least simple cross-platform apps, in React Native. I certainly don’t think this is always the right move — I wrote about this decision and its trade-offs for Extra Crunch a couple of months ago — but it’s certainly a more viable option than Cordova/Ionic, with which I’ve had nothing but terrible experiences over the years. And then there’s the slow but distinct rise of PWAs.

Is the app boom over? Are today’s app experts doomed to become the COBOL programmers of tomorrow? Not so fast. Native development tools and technologies have gotten a lot better in that time, too. (For instance, I’ve never talked to anyone who doesn’t vastly prefer Swift to Objective-C, and while Kotlin is newer, it seems to be on a similar trajectory for Android.) And we’re still seeing consistent growth in a “long tail” of new apps which, instead of being built for mass consumer or enterprise-wide audiences, are built and iterated for very specific business needs.

But I’d still feel at least slightly uneasy about going all-in as a specialist app developer if I were early in my career. Not because the market’s going to go away … but because, barring some new transcendent technology available only on phones (maybe some AR breakthrough?), the relentless growth and ever-increasing demand of yesteryear is, in mature markets like the US, apparently gone for the foreseeable future. There’s still some growth, but it seems that’s being sopped up by the rise of non-native development.

In short, for the first time since the launch of the App Store, it’s possible to at least envision a future in which the demand for native app developers begins to diminish. It’s not the only possible future, and it certainly isn’t the conventional wisdom — just ask any of the hordes of developers flocking to Google I/O in May, or WWDC in June. But it might be worth building up a backup strategy, just in case.


Source: Tech Crunch

Tesla is raising the price of its full self-driving option

In a few weeks, Tesla buyers will have to pay more for an option that isn’t yet completely functional, but that CEO Elon Musk promises will one day deliver full autonomous driving capabilities.

Musk tweeted Saturday that the price of the company’s full self-driving option will “increase substantially over time,” beginning May 1.

Tesla vehicles are not self-driving. Musk has promised that the advanced driver assistance capabilities on Tesla vehicles will continue to improve until eventually reaching that full automation high-water mark.

Musk didn’t provide a specific figure, but in response to a question on Twitter, he said the increase would be “something like” $3,000 or more. Full self-driving currently costs $5,000.

The price hike comes amid several notable changes and events, including an upcoming Autonomy Investor Day on April 22 meant to explain and showcase Tesla’s autonomous driving technology. On Thursday, Tesla announced that Autopilot, its advanced driver assistance system that offers a combination of adaptive cruise control and lane steering, is now a standard feature.

The price of vehicles with standard Autopilot is higher, although the increase is smaller than what the option previously cost: buyers had to pay $3,000 to add Autopilot before, and examples given by Tesla suggest a $500 savings.

Tesla also announced it would begin leasing the Model 3.

The more robust version of Autopilot is called Full Self-Driving, or FSD, and currently costs an additional $5,000. FSD includes Summon as well as Navigate on Autopilot, an active guidance system that navigates a car from a highway on-ramp to off-ramp, including interchanges and making lane changes. Once drivers enter a destination into the navigation system, they can enable “Navigate on Autopilot” for that trip.

Tesla continues to improve Navigate on Autopilot and the broader FSD system through over-the-air software updates. The company says on its website that FSD will soon be able to recognize and respond to traffic lights and stop signs and to drive automatically on city streets.

The next major step change is a new custom chip called Hardware 3 that Tesla recently began producing. The Tesla-built piece of hardware is designed to have greater processing power than the Nvidia computer currently in Model S, X, and 3 vehicles.

Musk tweeted Saturday that Tesla will begin swapping the new custom chip into existing vehicles in a few months.

Musk has been promising full self-driving for years now. In late 2016, when Tesla started producing electric vehicles with a more robust suite of sensors, radar and cameras that would allow higher levels of automated driving, it also started taking money from customers for FSD. Musk said at the time that it would become available if and when the technical challenges were conquered and regulatory approvals were met.


Source: Tech Crunch

Get ready for a new era of personalized entertainment

New machine learning technologies, user interfaces and automated content creation techniques are going to expand the personalization of storytelling beyond algorithmically generated news feeds and content recommendation.

The next wave will be software-generated narratives that are tailored to the tastes and sentiments of a consumer.

Concretely, it means that your digital footprint, personal preferences and context unlock alternative features in the content itself, be it a news article, live video or a hit series on your streaming service.

The same title contains different experiences for different people.

From smart recommendations to smarter content

When you use YouTube, Facebook, Google, Amazon, Twitter, Netflix or Spotify, algorithms select what gets recommended to you. The current mainstream services and their user interfaces and recommendation engines have been optimized to serve you content you might be interested in.

Your data, other people’s data, content-related data and machine learning methods are used to match people and content, thus improving the relevance of content recommendations and efficiency of content distribution.
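As a rough illustration of that matching step, here is a minimal collaborative-filtering sketch in Python. The users, items and engagement numbers are invented, and real recommendation engines are far more elaborate, but the core idea of scoring unseen content by the behavior of similar users is the same.

```python
# Minimal, illustrative sketch of matching people and content.
# All users, items and engagement values here are hypothetical.
import numpy as np

# Rows: users, columns: content items; values: engagement (e.g. share of watch time).
engagement = np.array([
    [0.9, 0.1, 0.0, 0.7],   # user A
    [0.8, 0.0, 0.1, 0.6],   # user B, similar tastes to A
    [0.0, 0.9, 0.8, 0.1],   # user C, very different tastes
])

def recommend(user_idx, top_k=1):
    """Rank items the user hasn't engaged with, weighting other users'
    engagement by cosine similarity between taste vectors."""
    norms = np.linalg.norm(engagement, axis=1)
    sims = engagement @ engagement[user_idx] / (norms * norms[user_idx] + 1e-9)
    sims[user_idx] = 0.0                          # ignore self-similarity
    scores = sims @ engagement                    # similarity-weighted item scores
    scores[engagement[user_idx] > 0] = -np.inf    # hide items already consumed
    return np.argsort(scores)[::-1][:top_k]

print(recommend(0))  # -> [2]: the one item user A hasn't seen, scored via similar users
```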

However, so far the content experience itself has mostly been the same for everyone. If the same news article, live video or TV series episode gets recommended to you and me, we both read and watch the same thing, experiencing the same content.

That’s about to change. Soon we’ll be seeing new forms of smart content, in which user interface, machine learning technologies and content itself are combined in a seamless manner to create a personalized content experience.

What is smart content?

Smart content means that the content experience itself is affected by who is seeing, watching, reading or listening to it. The content itself changes based on who you are.

We are already seeing the first forerunners in this space. TikTok’s whole content experience is driven by very short videos, audiovisual content sequences if you will, ordered and woven together by algorithms. Every user sees a different, personalized, “whole” based on her viewing history and user profile.

At the same time, Netflix has recently started testing new forms of interactive content (TV series episodes, e.g. Black Mirror: Bandersnatch) in which the user’s own choices directly affect the content experience, including dialogue and storyline. And more is on its way. With the Love, Death & Robots series, Netflix is experimenting with episode order, serving the episodes in a different order to different users.

Earlier examples of interactive audio-visual content include sports event streaming, in which the user can decide which particular stream she follows and how she interacts with the live content, for example rewinding the stream and spotting the key moments based on her own interests.

Simultaneously, we’re seeing how machine learning technologies can be used to create photo-like images of imaginary people, creatures and places. Current systems can recreate and alter entire videos, for example by changing the style, scenery, lighting, environment or central character’s face. Additionally, AI solutions are able to generate music in different genres.

Now imagine that TikTok’s individual short videos were automatically personalized with effects chosen by an AI system, and thus the whole video would be customized for you. Or that the choices in Netflix’s interactive content that affect plot twists, dialogue and even the soundtrack were made automatically by algorithms based on your profile.

Personalized smart content is coming to news as well. Automated systems, using today’s state-of-the-art NLP technologies, can generate long pieces of concise, comprehensible and even inventive textual content at scale. At present, media houses use automated content creation systems, or “robot journalists”, to create news material varying from complete articles to audio-visual clips and visualizations. Through content atomization (breaking content into small modular chunks of information) and machine learning, content production can be increased massively to support smart content creation.

Say that a news article you read or listen to is about a specific political topic that is unfamiliar to you. Compare the same article with a friend who is really deep into politics: your version of the story might use different concepts and offer a different angle than your friend’s. A beginner’s smart content news experience would differ from the experience of a topic enthusiast.

Content itself will become a fluid, software-like and personalized experience, where your digital footprint and preferences affect not just how the content is recommended and served to you, but what the content actually contains.

Automated storytelling?

How is it possible to create smart content that contains different experiences for different people?

Content needs to be thought of and treated as an iterative and configurable process rather than a ready-made static whole that is finished once it has been published to the distribution pipeline.

Importantly, the core building blocks of the content experience change: smart content consists of atomized modular elements that can be modified, updated, remixed, replaced, omitted and activated based on varying rules. In addition, content modules that have been made in the past can be reused if applicable. Content is designed and developed more like software.

Currently a significant amount of human effort and computing resources are used to prepare content for machine-powered content distribution and recommendation systems, varying from smart news apps to on-demand streaming services. With smart content, the content creation and its preparation for publication and distribution channels wouldn’t be separate processes. Instead, metadata and other invisible features that describe and define the content are an integral part of the content creation process from the very beginning.
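To make the idea of atomized, rule-driven content concrete, here is a minimal sketch. The article modules, metadata tags and reader profile are all invented for illustration; a production system would obviously be far richer.

```python
# A minimal sketch of atomized smart content: modules carry metadata, and
# simple rules assemble a different version of the "same" article per reader.
# Module texts, tags and the profile format are hypothetical.
from dataclasses import dataclass

@dataclass
class Module:
    text: str
    level: str        # "beginner" or "expert"
    formats: set      # delivery channels this module supports, e.g. {"text", "audio"}

ARTICLE = [
    Module("What the policy is, in one plain-language paragraph.", "beginner", {"text", "audio"}),
    Module("How the policy evolved from 1990 to 2019, with sources.", "expert", {"text"}),
    Module("Why this week's decision matters to you.", "beginner", {"text", "audio"}),
    Module("Full analysis, dissenting views and open questions.", "expert", {"text"}),
]

def assemble(profile):
    """Select the modules matching the reader's expertise and channel.
    The metadata created alongside the content drives the selection."""
    return [m.text for m in ARTICLE
            if m.level == profile["level"] and profile["channel"] in m.formats]

# Two readers receive two different renditions of the same underlying article.
print(assemble({"level": "beginner", "channel": "audio"}))
print(assemble({"level": "expert", "channel": "text"}))
```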

Turning Donald Glover into Jay Gatsby

With smart content, the narrative or image itself becomes an integral part of an iterative feedback loop, in which the user’s actions, emotions and other signals as well as the visible and invisible features of the content itself affect the whole content consumption cycle from the content creation and recommendation to the content experience. With smart content features, a news article or a movie activates different elements of the content for different people.

It’s very likely that smart content for entertainment purposes will have different features and functions than news media content. Moreover, people expect a frictionless and effortless content experience, so smart content differs from games: it doesn’t necessarily require direct actions from the user. If the person wants, personalization happens proactively and automatically, without explicit user interaction.

Creating smart content requires both human curation and machine intelligence. Humans focus on things that require creativity and deep analysis while AI systems generate, assemble and iterate the content that becomes dynamic and adaptive just like software.

Sustainable smart content

Smart content has different configurations and representations for different users, user interfaces, devices, languages and environments. The same piece of content contains elements that can be accessed through voice user interface or presented in augmented reality applications. Or the whole content expands into a fully immersive virtual reality experience.

In the same way as with personalized user interfaces and smart devices, smart content can be used for good and bad. It can be used to enlighten and empower, as well as to trick and mislead. Thus it’s critical that a human-centered approach and sustainable values are built into the very core of smart content creation. Personalization needs to be transparent, and the user needs to be able to choose whether she wants the content to be personalized or not. And of course, not all content will be smart in the same way, if at all.

If used in a sustainable manner, smart content can break filter bubbles and echo chambers, as it can be used to make a wide variety of information more accessible to diverse audiences. Through personalization, challenging topics can be presented to people according to their abilities and preferences, regardless of their background or level of education. For example, a beginner’s version of vaccination content or a digital media literacy article might use gamification elements, while the more experienced user gets a thorough, fact-packed account of the recent developments and research results.

Smart content is also aligned with efforts against today’s information operations, such as fake news and its variants like “deep fakes” (http://www.niemanlab.org/2018/11/how-the-wall-street-journal-is-preparing-its-journalists-to-detect-deepfakes). If content is like software, then legitimate software runs on your devices and interfaces without a problem. On the other hand, machine-generated, realistic-looking but suspicious content, like a deep fake, can be detected and filtered out based on its signature and other machine-readable qualities.


Smart content is the ultimate combination of user experience design, AI technologies and storytelling.

News media should be among the first to start experimenting with smart content. When intelligent content starts eating the world, one should be creating one’s own intelligent content.

The first players to master smart content will be among tomorrow’s reigning digital giants. And that’s one of the main reasons today’s tech titans are moving seriously into the content game. Smart content is coming.


Source: Tech Crunch

How do startups actually get their content marketing to work?

[Editor’s note: this is a free example of a series of articles we’re publishing by top experts who have cutting-edge startup advice to offer, over on Extra Crunch. Get in touch at ec_columns@techcrunch.com if you have ideas to share.] 

Even the best growth marketers fail to get content marketing to work. Many are unwittingly using tactics from 4 years ago that no longer work today.

This post cuts through the noise by sharing real-world data behind some of the biggest SEO successes this year.

It studies the content marketing performance of clients of Growth Machine and Bell Curve (my company) — two marketing agencies that have helped grow Perfect Keto, Tovala, Framer, Crowd Cow, Imperfect Produce, and over a hundred others.

What content do their clients write about, how do they optimize that content to rank well (SEO), and how do they convert their readers into customers?

You’re about to see how most startups manage their blogs the wrong way.

We’ll reference CupAndLeaf.com as we go along, exploring the tactics it used to hit 150,000 monthly visitors.

Write fewer, more in-depth articles

In the past, Google wasn’t skilled at identifying and promoting high quality articles. Their algorithms were tricked by low-value, “content farm” posts.

That is no longer the case.

Today, Google is getting close to delivering on its original mission statement: “To organize the world’s information and make it universally accessible and useful.” In other words, they now reliably identify high quality articles. How? By monitoring engagement signals: Google can detect when a visitor hits the Back button in their browser. This signals that the reader quickly bounced from the article after they clicked to read it.

If this occurs frequently for an article, Google ranks that article lower. It deems it low quality.

For example, below is a screenshot of the (old) Google Webmaster Tools interface. It visualizes this quality assessment process: It shows a blog post with the potential to rank for the keyword “design packaging ideas.” Google initially ranked it at position 25.

However, since readers weren’t engaging with the content as time went on, Google incrementally ranked the article lower — until it completely fell off the results page:

The lesson? Your objective is to write high quality articles that keep readers engaged. Almost everything else is noise.
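To make that feedback loop concrete, here is a toy sketch of the dynamic described above: pages whose clickers quickly bounce back to the results page drift down the rankings, while pages that hold readers drift up. This is purely an illustration, not Google’s actual ranking logic, and the thresholds and dwell times are invented.

```python
# Toy model of engagement-driven ranking: quick back-button bounces demote a
# page over successive evaluation windows; long dwell times promote it.
# All thresholds and numbers are hypothetical.
def update_rank(position, dwell_times_sec, quick_bounce_threshold=10):
    """Nudge a page's search position based on how long clickers stayed."""
    quick_bounces = sum(1 for t in dwell_times_sec if t < quick_bounce_threshold)
    bounce_rate = quick_bounces / len(dwell_times_sec)
    if bounce_rate > 0.6:
        return position + 5              # demote: most readers left almost immediately
    if bounce_rate < 0.2:
        return max(1, position - 3)      # promote: readers stuck around
    return position

# A post that starts at position 25 and keeps disappointing clickers sinks
# further down the results page, mirroring the screenshot described above.
position = 25
for window in ([3, 5, 8, 40, 4], [2, 6, 7, 5, 9], [4, 3, 6, 8, 5]):
    position = update_rank(position, window)
print(position)  # 40 in this toy example
```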

In studying our clients, we’ve identified four rules for writing engaging posts.

1. Write articles for queries that actually prioritize articles.

Not all search queries are best served by articles.

Below, examine the results for “personalized skincare:”

Notice that Google is prioritizing quizzes. Not articles.

So if you don’t perform a check like this before writing an article on “personalized skincare,” there’s a good chance you’re wasting your time. Because, for some queries, Google has begun prioritizing local recommendations, videos, quizzes, or other types of results that aren’t articles.

Sanity check this before you sit down to write.

2. Write titles that accurately depict what readers get from the content.

Are incoming readers looking to buy a product? Then be sure to show them product links.

Or, were they looking for a recipe? Provide that.

Make your content deliver on what your titles imply a reader will see. Otherwise, readers bounce. Google will then notice the accumulating bounces, and you’ll be penalized.

3. Write articles that conclude the searcher’s experience.

Your objective is to be the last site a visitor visits in their search journey.

Meaning, if they read your post and then don’t look at other Google results, Google infers that your post gave the searcher what they were looking for. And that’s Google’s prime directive: get searchers to their destination through the shortest path possible.

The two-part trick for concluding the searcher’s journey is to:

Go sufficiently in-depth to cover all the subtopics they could be looking for.

Link to related posts that may cover the tangential topics they seek.

This is what we use Clearscope for — it ensures we don’t miss critical subtopics that help our posts rank:

4. Write in-depth yet concise content.

In 2019, what do most of the top-ranked blogs have in common?

They skip filler introductions, keep their paragraphs short, and get to the point.

And, to make navigation seamless, they employ a “table of contents” experience:

Be like them, and get out of the reader’s way. All our best-performing blogs do this.

Check out more articles by Julian Shapiro over on Extra Crunch, including “What’s the cost of buying users from Facebook and 13 other ad networks?” and “Which types of startups are most often profitable?”

Prioritize engagement over backlinks

In going through our data, the second major learning was about “backlinks”, which is marketing jargon for a link to your site from someone else’s.

Four years ago, the SEO community was focused on backlinks and Domain Ranking (DR) — an indication of how many quality sites link to yours (scored from 0 to 100). At the time, they were right to be concerned about backlinks.

Today, our data reveals that backlinks don’t matter as much as they used to. They certainly help, but you need great content behind them.

Most content marketers haven’t caught up to this.

Here’s a screenshot showing how small publishers can beat out large behemoths today — with very little Domain Ranking:

The implication is that, even without backlinks, Google is still happy to rank you highly. Consider this: They don’t need your site to be linked from TechCrunch for their algorithm to determine whether visitors are engaged on your site.

Remember: Google has Google Analytics, Google Search, Google Ads, and Google Chrome data to monitor how searchers engage with your site. Believe me, if they want to find out whether your content is engaging, they can find a way. They don’t need backlinks to tell them.

This is not to say that backlinks are useless.

Our data shows they still provide value, just much less. Notably, they get your pages “considered” by Google sooner: If you have backlinks from authoritative and relevant sites, Google will have the confidence to send test traffic to your pages in perhaps a few weeks instead of in a few months.

Here’s what I mean by “test traffic”: in the weeks after you publish posts, Google notices them, then experimentally surfaces them at the top of related search terms. It then monitors whether searchers engage with the content (i.e. don’t quickly hit their Back button). If engagement is strong, Google will increasingly surface your articles and raise your rankings over time.

Having good backlinks can cut this process down from months to a few weeks.

Prioritize conversion over volume

Engagement isn’t your end goal. It’s the precursor to what ultimately matters: getting a signup, subscription, or purchase. (Marketers call this your “conversion event.”) Visitors can take a few paths to your conversion event:

Short: They read the initial post then immediately convert.

Medium: They read the initial post plus a few more before eventually converting.

Long (most common): They subscribe to your newsletter and/or return later.

To increase the ratio at which readers take the short and medium paths, optimize your blog posts’ copy, design, and calls to action. We’ve identified two rules for doing this.

1. Naturally segue to your pitch

Our data shows you should not pitch your product until the back half of your post.

Why? Pitching yourself in the intro can taint the authenticity of your article.

Also, the further a reader gets into a good article, the more familiarity and trust they’ll accrue for your brand, which means they’re less likely to ignore your pitch once they encounter it.

2. Don’t make your pitch look like an ad

Most blogs make their product pitches look like big, show-stopping banner ads.

Our data shows this visual fanfare is reflexively ignored by readers.

Instead, plug your product using a normal text link — styled no differently than any other link in your post. Woodpath, a health blog with Amazon products to pitch, does this well.

Think in funnels, not in pageviews

Finally, our best-performing clients focus less on their Google Analytics data and more on their readers’ full journeys: They encourage readers to provide their email so they can follow up with a series of “drip” emails. Ideally, these build trust in the brand and get visitors to eventually convert.

They “retarget” readers with ads. This entails pitching them with ads for the products that are most relevant to the topics they read on the blog. (Facebook and Instagram provide the granular control necessary to segment traffic like this.) You can read my growth marketing handbook to learn more about running retargeting ads well.

Here’s why retargeting is high-leverage: In running Facebook and Instagram ads for over a hundred startups, we’ve found that the cost of a retargeting purchase is one third the cost of a purchase from ads shown to people who haven’t yet been to our site.

Our data shows that clients who earn nothing from their blog traffic can sometimes earn thousands by simply retargeting ads to their readers.

Recap

It’s possible for a blog with 50,000 monthly visitors to earn nothing.

So, prioritize visitor engagement over volume: Make your hero metrics your revenue per visitor and your total revenue. That’ll keep your eye on the intermediary goals that matter: Attracting visitors with an intent to convert

Keeping those visitors engaged on the site

Then compelling them to convert

In short, your goal is to help Google do its job: Get readers where they need to go with the least amount of friction in their way.
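As a rough illustration of revenue per visitor as the hero metric, here is a minimal funnel sketch with made-up numbers; the rates and order values are hypothetical, but it shows why the same 50,000 visitors can be worth almost nothing or quite a lot.

```python
# Hero metrics from a simple engagement-to-conversion funnel.
# All rates and revenue figures are invented for illustration.
def blog_funnel(monthly_visitors, engaged_rate, conversion_rate, revenue_per_sale):
    engaged = monthly_visitors * engaged_rate        # visitors who stay and read
    conversions = engaged * conversion_rate          # signups or purchases
    total_revenue = conversions * revenue_per_sale
    return total_revenue, total_revenue / monthly_visitors  # total, per visitor

low_intent = blog_funnel(50_000, engaged_rate=0.10, conversion_rate=0.001, revenue_per_sale=40)
high_intent = blog_funnel(50_000, engaged_rate=0.40, conversion_rate=0.02, revenue_per_sale=40)

print(low_intent)   # (200.0, 0.004)  -> effectively nothing from 50,000 visitors
print(high_intent)  # (16000.0, 0.32) -> meaningful revenue from the same traffic
```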

Be sure to check out more articles from Julian Shapiro over on Extra Crunch, and get in touch with the Extra Crunch editors if you have cutting-edge startup advice to share with our subscribers, at ec_columns@techcrunch.com.


Source: Tech Crunch

Equity transcribed: Digging into the Uber S-1

Welcome back to this week’s transcribed edition of Equity, TechCrunch’s venture capital-focused podcast that unpacks the numbers behind the headlines.

And because it’s another week, why not another emergency episode? This time Kate Clark and Alex Wilhelm popped into the studio an hour before they were due to record the regular episode in order to dig into the Uber S-1. Not only did they dig into it, but they did so in real time. That’s what happens when you only have 10 minutes to get through almost 300 pages of numbers. And if it’s numbers you like, this is the episode for you.

The duo talks Uber’s profits and losses and provides context into it all. And to prove just how juicy this ep is: Equity Shots tend to be about 15 minutes long. Not this one. There was a lot to get to. And who better to lead the conversation than Kate and Alex? So join them as they walk you through what the Uber S-1 holds.

For access to the full transcription, become a member of Extra Crunch. Learn more and try it for free. 


Kate Clark: Hello and welcome to Equity Shot. This is TechCrunch’s Kate Clark, and I’m joined today by Alex Wilhelm of Crunchbase News.

Alex Wilhelm: Hello.

Kate: We are going to tackle some breaking news. But, a warning from Alex first.

Alex: Yeah, so it’s 2:09pm here on the West Coast on Thursday, which means that the S-1 dropped, I don’t know, about 45 minutes ago, maybe an hour. And there was a lot to do before the show, but we wanna get this out as soon as we can, so we did our notes doc by hand, and we got the S-1 pulled up, and we have a lot to go through. But, there may be an awkward pause in this, because we don’t have every single number pulled out ahead of time.

Kate: We are literally scrolling through the document live. We have a piece of paper taped to the wall in the studio with a very rough outline of what we’re gonna talk about. And we agreed that we’re going to try to take it slow and carry you guys through these important numbers as best we can.

Alex: Yes, and we are gonna start with yearly numbers to stay at the highest possible level, and we’re gonna talk about revenue first.

Alex: Now, keep in mind that we’re not talking about bookings, which is the total spend on Uber’s platform, we’re gonna talk about revenue, which is Uber’s portion of that overall platform spend. So, in 2014, because the S-1 goes back all the way to 2014, Uber had revenue of 495 million. That nearly quadrupled in 2015 to 1.99 billion … call it 2 billion flat. In 2016 that grew to 3.85 billion. It expanded to 7.9 billion in 2017, and 11.3 billion in 2018. So, basically a half a billion, to 11.3 billion from 2014 to 2018.

Kate: Yeah, quick reminder, a lot of these we’ve seen. I know there’s been plenty of reports highlighting Uber’s 2018 revenues of around 11 billion, but this is the first time we’re getting a full glimpse into financial history all the way back to 2014, and then also losses, which were interesting.

 

Alex: Very, very interesting.

Kate: I’ll quickly run through losses beginning in 2014. So, Uber lost 670 million that year, they were not profitable. The next year they lost 2.7 billion, again, not profitable. The next year they lost 370 million, guessing there was a big … oh, no, that was the year of the divestiture of … we just talked about this.

Alex: Uber China.


Source: Tech Crunch

Disney/Lucasfilm donates $1.5 million to FIRST

A day after the big Episode IX reveal, Disney and subsidiary Lucasfilm announced that they will be donating $1.5 million to FIRST. The non-profit group was founded by Dean Kamen in 1989 to help teach STEM through initiatives like robotics competitions.

Disney’s money will go to provide education and outreach to the underserved communities on which FIRST focuses. Details are pretty thin on precisely what the partnership will entail, but Disney’s certainly got a lot to gain from this sort of outreach — and Lucasfilm knows a thing or two about robots.

The Star Wars: Force for Change announcement was made in conjunction with Lucasfilm’s annual Star Wars Celebration in Chicago. Yesterday the event hosted a panel with the cast of the upcoming film, which included a teaser trailer and a title reveal.

“Star Wars has always inspired young people to look past what is and imagine a world beyond,” Lucasfilm president Kathleen Kennedy said in a release tied to the news. “It is crucial that we pass on the importance of science and technology to young people—they will be the ones who will have to confront the global challenges that lie ahead. To support this effort, Lucasfilm and Disney are teaming up with FIRST to bring learning opportunities and mentorship to the next generation of innovators.”

It’s been a good week for FIRST investments. Just yesterday Amazon announced its own commitment to the group’s robotics offerings.


Source: Tech Crunch

Unicorns, undercorns and horses: A guide to the nonsense

It’s been more than a half-decade since Aileen Lee of Cowboy Ventures kicked off the unicorn craze. Her well-read post for TechCrunch, which noted that an interesting cohort of private companies worth a billion dollars or more was worth examining, brought the “unicorn” into its current usage inside of tech.

And then tech itself did the term a favor, building and financing hundreds more. Now unicorns swarm like fleas, and simply snagging a $1 billion valuation these days is something that has been done in mere months and is a well-known vanity tactic used to juice hiring.

Anyway.

This has now gone on so long that many of us in the tech-focused journalism space are sick of saying the word. Kate Clark, Equity co-host and cool person, literally has “I am so sick of the buzz word [sic] ‘unicorn’ ” on her Twitter page. I agree with the sentiment.

But the phrase unicorn is back in the mix, so let’s examine the hubbub.

Booms and busts

The term unicorn quickly became overused as startups stayed private longer by pushing IPOs off as long as they could, and the capital world decided it was fine. Bored capital was pooling in venture coffers where it was itching to be disbursed by the wealthy into the holsters of the privileged. And thus the companies that in other cycles might have gone public simply didn’t, and the ranks of unicorns multiplied.


Soon the overused “unicorn” moniker was also too small. Decacorns took their own spot in the pantheon of silly names. A decacorn, in case you’ve led a more exciting life than me and are thus otherwise unfamiliar, is a private tech company that has racked up a $10 billion valuation. (A centacorn, I suppose, would be worth $100 billion?)

What a unicorn is has stretched and bent over time. But regardless of how the phrase has come to be defined in recent quarters, most people are talking about tech shops when they use it. And that’s pretty reasonable.

But what tech companies do very well is go up, and go down. And that’s when we wind up on the other side (tail-end?) of the unicorn debate: All are agreed that the phrase unicorn is useful. Not all, however, agree on what we call a unicorn that has fallen.

Oops

We have two questions: What do you call a unicorn that falls under the $1 billion valuation mark? And, relatedly, what do you call a unicorn that eventually goes public or otherwise exits at a discount to its final private-market valuation?

Regarding the leading question, there are two definitions that I am aware of.

First, as has come back into the discussion this week, there’s the concept of an “undercorn.” As Business Insider noted through a blog citation, Axios’ Dan Primack may have coined the term. Here, per Ian Sigalow’s post, which quotes the original Dan, is what Primack said:

When a venture-backed company breaks through the $1BN valuation mark, we call it a Unicorn. When the same company falls back below the $1BN threshold, it becomes an Undercorn.

That’s simple enough. However, Erin Griffith of The New York Times used the phrase recently in a slightly different manner. Here’s her riff:

Unicorns that sell or go public below their last private valuation are known as undercorns.

That’s different, as it defines undercorns as exited unicorns that lose altitude, rather than unicorns losing their unicornyness altogether. However, as we’re working on defining made-up words to describe an economic anomaly caused by government-determined free money, we can relax a little and realize that both uses of the word undercorn are equally differentiated from zero.

Now I get to talk about myself. Back in 2016, I had my own thoughts on what a unicorn that had fallen under the requisite billion-dollar valuation should be called:

If a unicorn is a horse with a spike, when you take the spike off you just have a horse.

I thought it was pretty smart. No one else agreed, and thus I have to admit that Primack and Griffith have made quite a lot more noise with the undercorn phrase, even if they don’t quite agree on what it means. (Feel free to become a partisan of either side, as we are long overdue for something useless and entertaining on the internet.)

Sadly, there are even more unicorn-related terms and phrases in and amidst the tech conversation that we shouldn’t miss.

Exotica and other notes

Returning to Axios, it has a new phrase out this year that’s worth keeping in our hat. From its February coverage of the venture landscape, I give you the phrase “minotaur:”

The Big Picture: Meet the minotaurs — our term for the companies that would be worth more than $1 billion even if the only thing they did was to take the cash that they have raised and put it in a checking account.

I wanted to hate this, but wound up deciding there are a host of worse words that could have been selected. And as it wasn’t a unicorn-variant, how could I complain?

The only other thing I can recall that fits our task today is something that Jason and I wrote four years ago in TechCrunch. As a follow-up to our “How To Speak Startup” post, we wrote the brilliantly titled “How To Speak Startup, Part Deux,” which contained the following definition:

Unicorn — As if metaphors in Silicon Valley couldn’t get more childish.

The joke’s on us, however, as we have used the term on the order of six billion times since then. And that’s that, I think. Now you know!


Source: Tech Crunch

Niantic EC-1, Part 3 and what the data show are the best fundraising decks

Harry Potter, the Platform, and the Future of Niantic

After deep dives into the story of Niantic’s spinout from Google and its creation and development of Pokémon GO, TechCrunch editor Greg Kumparak turns his attention to Niantic’s future, looking at how Harry Potter: Wizards Unite is not just uniting wand-wielders, but also the company’s ambitions in areas as diverse as 5G, China, 3D mapping, and the next generation of augmented reality.

This is definitely a weekend read (it’s about 25 mins in length), but here’s a taste:

There’s one more piece to this grander AR vision, and it’s perhaps the biggest and most challenging one.

Your phone knows your location, but current GPS tech is really only accurate within a few feet. Even when it’s at its most accurate, it doesn’t always stay there for long. Ever used Google Maps in a big city and had your marker hop around all over the map? That’s probably from the signals bouncing off buildings, vehicles, and all of the myriad metal things around you.

That’s good enough for basic augmented reality functionality seen in Pokémon GO today. But Niantic wants to get closer and closer to the vision of GO’s original trailer, where hundreds of people can look up to see the same Zapdos flying overhead, synchronized in time and space across all of their devices. Where you can gather in a park with friends to watch massive Pokémon battles play out in real time, or leave a virtual gift on a bench for a friend to walk up to and discover. For this, Niantic will need something more precise and more consistent. Like pretty much everything with Niantic, it all goes back to maps.

More specifically, they’ll need to build a 3D map of the environments where people are playing. It’s easy enough to get relatively accurate 3D data about huge things like buildings, but what about everything around those buildings? The statues, the planters, the trees, the bus stops. John [Hanke, Niantic’s CEO], and others in the space, refer to this map as the “AR Cloud.”


Source: Tech Crunch

Matt Cutts on solving big problems with lean solutions at the US Digital Service

Updating the federal government’s digital infrastructure seems like a Herculean task akin to cleaning out the Augean stables. Where do you even start shoveling? Former Googler and current head of the U.S. Digital Service Matt Cutts says it’s not quite that hard — but he’s had to leave his Silicon Valley startup outlook at the door.

“In the Valley and San Francisco, they’re geared to move fast and break things. And that’s fantastic to explore a space,” Cutts told me in an interview. “But the government has to move purposefully and fix things. It’s more about finding the right decision, achieving consensus, creating good communication.”

The USDS is a small (and actively recruiting) department that takes on creaking interfaces and tangled databases of services for, say, veteran benefit management or immigration documentation, buffing them to a shiny finish that may save their users months of literal paperwork.

Some notes from the USDS’s work on modernizing the Medicare payment system.

Recently, for instance, the USDS overhauled VA.gov, which is how many veterans access things like benefits, make medical appointments, and so on. But until recently it was kind of a mess of interconnected sub-sites and instructional PDFs. USDS interviewed a couple thousand vets and remade the site with a single login, putting the most-used services right on the front page. Seems obvious, but the inertia of these systems is considerable.

“Oftentimes we build a front end and it still talks to an abysmal, or maybe antiquarian, system in the back end,” Cutts said. “VA.gov required a special version of Internet Explorer!”

There are always paper alternatives, but those can be so slow and clunky that they might take three or four months to complete, and can be so complex that people will hire a lawyer to do them rather than risk further delay. These are ostensibly free and open services available to all vets — but they weren’t in practice. And there were accessibility problems all over the place, Cutts noted, which is especially troubling with a disability-heavy population like veterans.

These projects are often short-term, putting modern web and backend standards to work and handing the results off to the agency or department that requested them. The USDS isn’t built for long-term support but acts as a strike team, putting smart solutions in place that may seem obvious in startup culture but haven’t yet become standard operating procedure in the capital.

The work they do is guided by impact, not politics, which is likely part of the reason they’ve managed to avoid interference by the Trump administration, which has treated many other Obama-era initiatives like pests to be exterminated. Yet the nature of the work is in a way fundamentally progressive, in that it is about bottom-up accessibility and helping under-represented or unprivileged groups.

For instance, they’ve been hard at work on immigration issues that would expedite both asylum seekers and seasonal farm workers at the Mexican border. That’s a political live wire right now, even if the decision to do it was strictly based on helping a large population frustrated by outdated digital tools.

The new farmers.gov, built for the Department of Agriculture, vastly streamlines the H-2A visa application process, centralizing documentation and services that were previously spread across several other major departments and websites. That’s unarguably a good thing, but like anything relating to immigration and foreign labor it is possible it could get swept up in the partisan twister. Fortunately that doesn’t seem to have happened.

The new, improved and simplified farmers.gov provides app integration and straightforward design.

“The fact is we get good support,” Cutts said when I asked him about the current political environment. It may not be loud support, but quiet actions like appointing former USDS officials and engineers to important roles within the administration are common, he said.

There are plenty of other programs looking to modernize federal as well as state systems, he pointed out; it’s a rising tide and it’s lifting a lot of boats.

“I signed up for a three month tour, and that was three years ago,” he said. “It’s really a whole civic tech movement here, there are a ton of people sort of holding hands and working together. There’s also stuff happening at the state and local level, at the international level, from the UK to Estonia and Singapore — everyone’s starting to realize this matters.”

Recruitment, however, is more difficult than he’d like, perhaps partly because of self-imposed hiring practices made to reflect the diversity of the country.

“We get the best results when we represent all of America,” said Cutts. “So I go to Microsoft and Ann Arbor, regular events, but also like, Lesbians Who Tech or Grace Hopper Fest.”

Still, startups and big tech companies regularly poach talent or otherwise lure them away. “They’re just better at recruiting,” he said. And there’s some kind of fundamental disconnect at work, too, perhaps the comfortable contempt many young people have for the government — but he suggested that those seeking to do good might want to do a more serious evaluation of the tech landscape.

“I joined Google because I wanted to make the world a better place,” he said. “But if you look at the #metoo movement, how the tech industry has been acting lately… everyone at those companies has to ask that question, am I really having that impact?”

If you’re not sure, you might consider doing a tour at the USDS. They’re launching products and helping people just like startups aim to do, but they’re beholden to ordinary citizens in need, not investors. That sounds like a step up to me.


Source: Tech Crunch