Redpoint Ventures is raising another $400M to invest in Chinese companies

Redpoint Ventures is doubling down on China. The firm, headquartered in Menlo Park, has filed documents with the U.S. Securities and Exchange Commission to raise $400 million across two new China-focused funds.

The firm has set a $300 million target for its second flagship China fund, a significant increase from the $180 million it garnered for its debut China fund in 2016. Redpoint is also raising a $100 million opportunity fund that will likewise focus on the Chinese tech startup market.

Redpoint launched its dedicated China fund, led by managing director David Yuan and partners Tony Wu and Reggie Zhang, in 2016. Wu isn’t listed on the latest filings and may have taken a step back from the China team. We’ve reached out to Redpoint for additional details.

Investing at the seed, early- and growth-stages, Redpoint’s portfolio includes Stripe, Snowflake and Brandless. Its China fund has deployed capital to Yixia, a video blogging platform valued at more than $3 billion; Renrenche.com, an online marketplace for used cars; and iDreamSky, a Chinese game distributor that recently debuted on the Hong Kong Stock Exchange.

Following a banner year for venture capital fundraising wherein firms brought in $55.5 billion across 256 vehicles, per PitchBook, VCs are already off to a strong start in 2019. This week, Resolute Ventures, an early-stage firm based in San Francisco and Boston, closed its fourth fund on $75 million, and Silicon Valley-based BlueRun Ventures nabbed $130 million for its sixth flagship fund. Earlier this month, Lightspeed Venture Partners announced $560 million in capital commitments for its fourth China fund.


Source: Tech Crunch

Amazon shareholders want the company to stop selling facial recognition to law enforcement

Amazon shareholders are demanding the company stop selling Rekognition, the company’s facial recognition software, to law enforcement. Unless the board of directors determines the technology “does not cause or contribute to actual or potential violations of civil and human rights,” shareholders want Amazon to stop selling the software to government agencies.

Rekognition, which is part of Amazon Web Services, has the ability to conduct image and video analyses of faces. The technology can identify and track people, as well as their emotions. Amazon has reportedly sold Rekognition to law enforcement agencies in at least two states. Amazon has also reportedly pitched the software to U.S. Immigration and Customs Enforcement.
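For a sense of what that analysis looks like in practice, here is a sketch of pulling the top emotion label out of a response shaped like the documented output of Rekognition’s DetectFaces API. The values below are invented for illustration; a real call would go through AWS’s SDK with an actual image.

```python
# Illustrative only: a response in the documented shape of Rekognition's
# DetectFaces output. The numbers are made up; a real call goes through
# boto3's rekognition client with an actual image.
sample_response = {
    "FaceDetails": [
        {
            "BoundingBox": {"Width": 0.3, "Height": 0.4, "Left": 0.1, "Top": 0.2},
            "Emotions": [
                {"Type": "CALM", "Confidence": 55.2},
                {"Type": "HAPPY", "Confidence": 38.1},
                {"Type": "CONFUSED", "Confidence": 6.7},
            ],
        }
    ]
}

def top_emotions(response):
    """Return the highest-confidence emotion label for each detected face."""
    labels = []
    for face in response["FaceDetails"]:
        best = max(face["Emotions"], key=lambda e: e["Confidence"])
        labels.append(best["Type"])
    return labels

print(top_emotions(sample_response))  # -> ['CALM']
```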

Last May, the American Civil Liberties Union of Northern California shed some light on Rekognition, saying it had obtained documents that raise “profound civil liberties and civil rights concerns.” In one test, the ACLU found Rekognition wrongly matched 28 members of Congress to criminal mugshots, disproportionately misidentifying members of Congress of color.

This resolution, organized by the non-profit Open MIC, is backed by a group of shareholders with a combined $1.32 billion in assets under management.

“It’s a familiar pattern: a leading tech company marketing what is hailed as breakthrough technology without understanding or assessing the many real and potential harms of that product,” Open MIC Executive Director Michael Connor wrote in a blog post. “Sales of Rekognition to government represent considerable risk for the company and investors. That’s why it’s imperative those sales be halted immediately.”

Shareholders intend for this resolution to be voted on in Amazon’s annual meeting this spring. TechCrunch has reached out to Amazon and will update this story if we hear back.


Source: Tech Crunch

‘Star Wars’ returns: Trump calls for space-based missile defense

The President has announced that the Defense Department will pursue a space-based missile defense system reminiscent of the one proposed by Reagan in 1983. As with Reagan’s ultimately abortive effort, the technology doesn’t actually exist yet and may not for years to come — but it certainly holds more promise now than 30 years ago.

In a speech at the Pentagon reported by the Associated Press, Trump explained that a new missile defense system would “detect and destroy any missile launched against the United States anywhere, any time, any place.”

“My upcoming budget will invest in a space-based missile defense layer. It’s new technology. It’s ultimately going to be a very, very big part of our defense, and obviously our offense,” he said. The nature of this “new technology” is not entirely clear, as none was named or ordered to be tested or deployed.

Lest anyone think that this is merely one of the President’s flights of fancy, he is in fact simply voicing the conclusions of the Defense Department’s 2019 Missile Defense Review, a major report that examines the state of the missile threat against the U.S. and what countermeasures might be taken.

It reads in part:

As rogue state missile arsenals develop, space will play a particularly important role in support of missile defense.

Russia and China are developing advanced cruise missiles and hypersonic missile capabilities that can travel at exceptional speeds with unpredictable flight paths that challenge existing defensive systems.

The exploitation of space provides a missile defense posture that is more effective, resilient and adaptable to known and unanticipated threats… DoD will undertake a new and near-term examination of the concepts and technology for space-based defenses to assess the technological and operational potential of space-basing in the evolving security environment.

The President’s contribution seems largely to have been to eliminate mention of the nation-states directly referenced (and independently assessed at length) in the report, and to suggest the technology is ready to deploy. In fact, all the Pentagon is ready to do is begin research into the feasibility of such a system or systems.

No doubt space-based sensors are well on their way; we already have near-constant imaging of the globe (companies like Planet have made it their mission), and the number and capabilities of such satellites are only increasing.

Space-based tech has evolved considerably over the many years since the much-derided “Star Wars” proposals, but some of those proposals remain as unrealistic as they were then. However, as the Pentagon report points out, the only way to know for sure is to conduct a serious study of the possibilities, and that’s what this plan calls for. All the same, it may be best for Trump not to repeat Reagan’s mistake of making promises he can’t keep.


Source: Tech Crunch

Dolby quietly preps augmented audio recorder app “234”

Dolby is secretly building a mobile music production app it hopes will seduce SoundCloud rappers and other musicians. Codenamed “234” and formerly tested under the name Dolby Live, the free app measures background noise before you record and then nullifies it. Users can also buy “packs” of audio effects to augment their sounds with EQ settings like “Amped, Bright, Lyric, Thump, Deep, or Natural.” Recordings can then be exported, shared to Dolby’s own audio social network, or uploaded directly to SoundCloud through a built-in integration.
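Dolby hasn’t said how its noise nullification actually works, but the general measure-then-suppress idea can be sketched: sample the room during a short pre-roll, treat that level as the noise floor, then silence anything that never rises above it. This toy version is only meant to illustrate the concept; real suppression operates on the frequency spectrum and is far more sophisticated.

```python
import math

def rms(samples):
    """Root-mean-square level of a chunk of audio samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def gate(recording, noise_preroll, margin=2.0, chunk=4):
    """Crude noise gate: silence any chunk quieter than the measured
    noise floor times a safety margin."""
    floor = rms(noise_preroll) * margin
    out = []
    for i in range(0, len(recording), chunk):
        block = recording[i:i + chunk]
        out.extend(block if rms(block) > floor else [0.0] * len(block))
    return out

preroll = [0.01, -0.02, 0.015, -0.01]        # measured room noise
signal = [0.01, -0.01, 0.02, -0.015,          # noise-level chunk: gated out
          0.5, -0.4, 0.45, -0.5]              # loud chunk: kept
print(gate(signal, preroll))
```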

You could call it VSCO or Instagram for SoundCloud.

234 is Dolby Labs’ first big entrance into the world of social apps that could give it more face time with consumers than its core business of integrating audio technology into devices made by other manufacturers. Using 234 to convince musicians that Dolby is an expert at audio quality could get them buying more of those speakers and headphones. And by selling audio effect packs, the app could earn the company money directly while making the world of mobile music sound better.

Dolby has been covertly testing Dolby Live/234 since at least June. A source tipped us off to the app and while the company hasn’t formally announced it, there is a website for signing up to test Dolby 234. Dolby PR refused to comment on the forthcoming app. But 234’s sign-up site advertises it, saying “How can music recorded on a phone sound so good? Dolby 234 automatically cleans up the sound, gives it tone and space, and finds the ideal loudness. It’s like having your own producer in your phone.”

Those with access to the Dolby 234 app can quickly record audio or audio/video clips with optional background noise cancelling. Free sound editing tools include trimming, loudness boost, and bass and treble controls. Users can get a seven-day free trial of Dolby’s “Essentials” pack of EQ presets like ‘Bright’ before having to pay, though the pack was free in the beta version so we’re not sure how much it will cost. The “Tracks” tab lets you edit or share any of the clips you’ve recorded.

Overall, the app is polished and intuitive with a lively feel thanks to the Instagram logo-style purple/orange gradient color scheme. The audio effects have a powerful impact on the sound without being gimmicky or overbearing. There’s plenty of room for additional features, though, like multi-tracking, a metronome, or built-in drum beats.

For musicians posting mobile clips to Instagram or other social apps, 234 could make them sound way better without much work. There’s also a huge opportunity for Dolby to court podcasters and other non-music audio creators. I’d love a way to turn effects on and off mid-recording so I could add the feeling of an intimate whisper or echoey amphitheater to emphasize certain words or phrases.

Given how different 234 is from Dolby’s traditional back-end sound processing technologies, it’s done a solid job with design and the app could still get more bells and whistles before an official launch. It’s a creative move for the brand and one that recognizes the seismic shifts facing audio production and distribution. As always-in earbuds like Apple’s AirPods and voice interfaces like Alexa proliferate, short-form audio content will become more accessible and popular. Dolby could spare the world from having to suffer through amazing creators muffled by crappy recordings.


Source: Tech Crunch

NPR turns comedy game show ‘Wait, Wait Don’t Tell Me!’ into an Alexa and Google voice app

NPR is turning its popular game show program “Wait, Wait…Don’t Tell Me!” into a voice application for smart speakers, including both Amazon Alexa and Google Assistant-powered devices. The new app lets listeners play along at home by answering the fill-in-the-blank questions from this week’s news – just like the players do on the NPR podcast and radio show, which today airs on more than 720 NPR Member stations.

Also like the NPR program, the new smart speaker game includes the voice talent of the comedy quiz show’s hosts, Peter Sagal and Bill Kurtis.

To get started, you just say either “Alexa, open Wait Wait Quiz” or “Hey Google, talk to the Wait Wait Quiz,” depending on your device.

After hearing the question, you can then speak – or shout – your answer at your smart speaker to find out if you got it right.

The game is five minutes long and updated every week, NPR says.
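Under the hood, a voice app like this receives the player’s utterance and replies with a JSON document in the Alexa Skills Kit response format. The quiz’s actual backend isn’t public, so the answer-checking helper below is hypothetical; only the response shape follows the documented format.

```python
def check_answer(spoken, accepted):
    """Hypothetical helper: does the utterance contain an accepted answer?"""
    spoken = spoken.lower()
    return any(a.lower() in spoken for a in accepted)

def build_alexa_response(text, end_session=False):
    """Minimal reply in the documented Alexa Skills Kit response format."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": end_session,
        },
    }

if check_answer("um, I think it's the government shutdown", ["government shutdown"]):
    reply = build_alexa_response("That's right!")
else:
    reply = build_alexa_response("Sorry, not quite.")
print(reply["response"]["outputSpeech"]["text"])  # -> That's right!
```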

In addition to bragging rights around your home if you win, game players get to compete for an offbeat prize – the chance to have the show’s talent personalize their voicemail, as well as hear their name announced on the air.

The new game was developed in collaboration with VaynerMedia’s internet-of-things division, VaynerSmart, NPR notes.

It’s not NPR’s first foray into the smart speaker market, but it is its first game.

To date, NPR’s other voice apps have included news briefings, like Up First, News Now, and Story of the Day (plus its variations like World Story of the Day; Business Story of the Day). NPR also offers a live radio app and its NPR One app, as well as dedicated apps for its Planet Money program.

NPR’s continual expansion into smart speakers has to do with the growing popularity of these devices. Its own Smart Audio Report says that 53 million people (or 21% of the adult population) now own a smart speaker, and NPR wants its content there to reach them.


Source: Tech Crunch

Sources: Email security company Tessian is closing in on a $40M round led by Sequoia Capital

Continuing a trend that VCs here in London tell me is seeing an increasing amount of deal-flow in Europe attract the interest of top-tier Silicon Valley venture capital firms, TechCrunch has learned that email security provider Tessian is the latest to raise from across the pond.

According to multiple sources, the London-based company has closed a Series B round led by Sequoia Capital. I understand that the deal could be announced within a matter of weeks, and that the round size is in the region of $40 million. Tessian declined to comment.

Founded in 2013 by three engineering graduates from Imperial College — Tim Sadler, Tom Adams and Ed Bishop — Tessian is deploying machine learning to improve email security. Once installed on a company’s email systems, the machine learning tech analyses an enterprise’s email networks to understand normal and abnormal email sending patterns and behaviours.

Tessian then attempts to detect anomalies in outgoing emails and warns users about potential mistakes, such as a wrongly intended recipient, or nefarious employee activity, before an email is sent. More recently, the startup has begun addressing in-bound email, too. This includes preventing phishing attempts or spotting when emails have been spoofed.
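Tessian hasn’t published the details of its models, so the following is only a simplified sketch of the general idea the company describes: learn each sender’s historical recipient patterns, then flag outgoing mail addressed to a domain that sender has never emailed, which is how a typo’d recipient would stand out.

```python
from collections import defaultdict

class OutboundChecker:
    """Toy recipient-anomaly check; a stand-in for the kind of pattern
    learning described in the article, not Tessian's actual algorithm."""

    def __init__(self):
        self.history = defaultdict(set)  # sender -> domains they've mailed

    def observe(self, sender, recipient):
        self.history[sender].add(recipient.split("@")[1])

    def flag(self, sender, recipients):
        """Return recipients whose domain this sender has never emailed."""
        known = self.history[sender]
        return [r for r in recipients if r.split("@")[1] not in known]

checker = OutboundChecker()
for rcpt in ["bob@acme.com", "carol@acme.com", "dave@partner.io"]:
    checker.observe("alice@corp.com", rcpt)

# A typo'd domain stands out as never-before-seen for this sender.
print(checker.flag("alice@corp.com", ["bob@acme.com", "bob@acrne.com"]))
# -> ['bob@acrne.com']
```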

Meanwhile, Tessian (formerly called CheckRecipient) raised $13 million in Series A funding just 7 months ago in a round led by London’s Balderton Capital. The company’s other investors include Accel, Amadeus Capital Partners, Crane, LocalGlobe, Winton Ventures, and Walking Ventures.


Source: Tech Crunch

Behold, Slack’s new logo

New year, new you, new Slack. The popular workplace chat service’s resolution clearly involved a bit of a facelift, starting with a new logo. A redesigned version of the familiar grid logo launched this week, and appears to have rolled out on most major platforms.

Slack did the customary thing of explaining the hell out of the new design over on its blog. There’s all of the usual stuff there, about maintaining the spirit while modernizing things a bit. The company also calls the design “simpler,” which is certainly up for debate. That’s fair enough from the standpoint of the color scheme, but try drawing this one from memory. It’s considerably tougher than the old tic-tac-toe version.

The new logo does away with the tilted hashtag/pound symbol of overlapping translucent colors in favor of a symmetrical arrangement of rounded rectangles and pins. The profusion of colors has been pared down to four (light blue, magenta, green and yellow) and the whole effect is reminiscent of a video game console or hospital.

“It uses a simpler color palette and, we believe, is more refined, but still contains the spirit of the original,” the company writes. “It’s an evolution, and one that can scale easily, and work better, in many more places.”

Created by Michael Bierut at the New York firm Pentagram, the new logo marks the first major redesign since the company was launched (in fact, the original apparently predates Slack’s official launch). “The updated palette features four primary colors, more manageable than the original’s eleven, which suffered against any background color other than white,” the firm writes in its own post. “These have been optimized to look better on screen, and the identity also retains Slack’s distinctive aubergine purple as an accent color.”

The new design does potentially open up another issue:

Unintentional, obviously, but the negative space in the arrangement above, at a certain orientation, forms an ancient symbol that was later mirrored and co-opted by the worst people ever. As a number of designers have noted, well, these things can happen, though the association and the “once you’ve seen it, you can’t unsee it” effect could eventually prove the new logo’s ultimate undoing.


Source: Tech Crunch

Lance Armstrong just wrote his first check as a VC

Lance Armstrong revealed last month that an early investment in Uber — courtesy of a $100,000 check that he funneled into the company in 2009 through Lowercase Capital —  “saved” his family from financial ruin. This was after evidence surfaced in 2012 that he used performance-enhancing drugs, and he was stripped not only of his seven consecutive Tour de France titles but also lost the many lucrative endorsement deals he enjoyed at the time.

Armstrong, talking with CNBC in December, declined to say how big a return that Uber investment has produced, but it seemingly gave him a taste for the riches that venture capital can produce when the stars align. To wit, Armstrong just founded his own venture fund, Next Ventures, to back startups in the sports, fitness, nutrition and wellness markets, and today it announced its first investment.

That portfolio company: Carlsbad, Calif.-based PowerDot, a 2.5-year-old maker of an app-based smart muscle stimulation device that sends electrical pulses to contract tender soft tissue, helping runners and other athletes recover from their workouts.

We weren’t able to talk with Armstrong — a public relations spokesperson for the firm said he isn’t prepared to speak in detail about it yet — but last month, he spoke candidly about his past actions continuing to haunt him, including years of lying to the public and race organizers, as well as his “bullying,” which he called “terrible,” adding: “It was the way I acted; that was my undoing.”

In fact, Armstrong, who has been banned from cycling for life, said that as he has begun reaching out for meetings, not everyone is eager to take his calls. As he told CNBC’s Andrew Ross Sorkin, “You have to assume that’s what they’re thinking: ‘I don’t want this association; I don’t trust this guy.’”

Armstrong seems to be getting by in the meantime. Just this week, Architectural Digest took readers on a tour through Armstrong’s contemporary Aspen home and his art collection. Armstrong purchased the 6,000-square-foot home a decade ago. He and his family now live in Colorado full-time.


Source: Tech Crunch

Robots learn to grab and scramble with new levels of agility

Robots are amazing things, but outside of their specific domains they are incredibly limited. So flexibility — not physical, but mental — is a constant area of research. A trio of new robotic setups demonstrate ways they can evolve to accommodate novel situations: using both “hands,” getting up after a fall, and understanding visual instructions they’ve never seen before.

The robots, all developed independently, are gathered together today in a special issue of the journal Science Robotics dedicated to learning. Each shows an interesting new way in which robots can improve their interactions with the real world.

On the other hand…

First there is the question of using the right tool for a job. As humans with multi-purpose grippers on the ends of our arms, we’re pretty experienced with this. We understand from a lifetime of touching stuff that we need to use this grip to pick this up, we need to use tools for that, this will be light, that heavy, and so on.

Robots, of course, have no inherent knowledge of this, which can make things difficult; a robot may not understand that it can’t pick up something of a given size, shape or texture. A new system from Berkeley roboticists acts as a rudimentary decision-making process, classifying objects as able to be grabbed either by an ordinary pincer grip or with a suction cup grip.

A robot, wielding both simultaneously, decides on the fly (using depth-based imagery) what items to grab and with which tool; the result is extremely high reliability even on piles of objects it’s never seen before.

It’s done with a neural network that consumed millions of data points on items, arrangements, and attempts to grab them. If you attempted to pick up a teddy bear with a suction cup and it didn’t work the first ten thousand times, would you keep on trying? This system learned to make that kind of determination, and as you can imagine such a thing is potentially very important for tasks like warehouse picking for which robots are being groomed.
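The decision itself can be caricatured as scoring the same object under two grasp models and taking the more confident one. The feature names and scoring functions below are invented stand-ins, not Dex-Net’s actual learned networks, which are trained on those millions of data points:

```python
# Hypothetical stand-ins for the two learned grasp-quality models.
def suction_score(obj):
    # Suction favors smooth, flat surfaces away from edges.
    return obj["flatness"] * (1.0 - obj["porosity"])

def pincer_score(obj):
    # A parallel-jaw gripper favors graspable widths with opposing faces.
    return obj["antipodality"] if obj["width_cm"] < 8 else 0.0

def choose_tool(obj, threshold=0.5):
    """Pick the tool with the higher score, or skip if neither is confident."""
    scores = {"suction": suction_score(obj), "pincer": pincer_score(obj)}
    tool = max(scores, key=scores.get)
    return tool if scores[tool] >= threshold else "skip"

teddy_bear = {"flatness": 0.1, "porosity": 0.9, "antipodality": 0.8, "width_cm": 12}
box = {"flatness": 0.95, "porosity": 0.0, "antipodality": 0.6, "width_cm": 20}

print(choose_tool(teddy_bear))  # fuzzy and wide: neither tool is confident
print(choose_tool(box))         # smooth flat top: suction wins
```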

Interestingly, because of the “black box” nature of complex neural networks, it’s difficult to tell what exactly Dex-Net 4.0 is actually basing its choices on, although there are some obvious preferences, explained Berkeley’s Ken Goldberg in an email.

“We can try to infer some intuition but the two networks are inscrutable in that we can’t extract understandable ‘policies,’ ” he wrote. “We empirically find that smooth planar surfaces away from edges generally score well on the suction model and pairs of antipodal points generally score well for the gripper.”

Now that reliability and versatility are high, the next step is speed; Goldberg said that the team is “working on an exciting new approach” to reduce computation time for the network, to be documented, no doubt, in a future paper.

ANYmal’s new tricks

Quadrupedal robots are already flexible in that they can handle all kinds of terrain confidently, even recovering from slips (and of course cruel kicks). But when they fall, they fall hard. And generally speaking they don’t get up.

The way these robots have their legs configured makes it difficult to do things in anything other than an upright position. But ANYmal, a robot developed by ETH Zurich (and which you may recall from its little trip to the sewer recently), has a more versatile setup that gives its legs extra degrees of freedom.

What could you do with that extra movement? All kinds of things. But it’s incredibly difficult to figure out the exact best way for the robot to move in order to maximize speed or stability. So why not use a simulation to test thousands of ANYmals trying different things at once, and use the results from that in the real world?

This simulation-based learning doesn’t always work, because it isn’t possible right now to accurately simulate all the physics involved. But it can produce extremely novel behaviors or streamline ones humans thought were already optimal.

At any rate, that’s what the researchers did here, and not only did they arrive at a faster trot for the bot (above), but they also taught it an amazing new trick: getting up from a fall. Any fall. Watch this:

It’s extraordinary that the robot has come up with essentially a single technique to get on its feet from nearly any likely fall position, as long as it has room and the use of all its legs. Remember, people didn’t design this — the simulation and evolutionary algorithms came up with it by trying thousands of different behaviors over and over and keeping the ones that worked.
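The keep-what-works loop the researchers relied on can be sketched in miniature. Here the expensive physics simulation is replaced with a toy fitness function whose optimum is known in advance, purely to show the mechanics of mutate, score, and select:

```python
import random

def fitness(params):
    """Stand-in for a physics simulation scoring one candidate behavior
    (e.g. how quickly a simulated ANYmal rights itself). This toy
    function has a known optimum at (1.0, -2.0)."""
    x, y = params
    return -((x - 1.0) ** 2 + (y + 2.0) ** 2)

def evolve(generations=60, pop_size=32, keep=8, sigma=0.3, seed=0):
    rng = random.Random(seed)
    pop = [(rng.uniform(-5, 5), rng.uniform(-5, 5)) for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the best performers, then refill by mutating survivors.
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:keep]
        pop = survivors + [
            (p[0] + rng.gauss(0, sigma), p[1] + rng.gauss(0, sigma))
            for p in [rng.choice(survivors) for _ in range(pop_size - keep)]
        ]
    return max(pop, key=fitness)

best = evolve()
print(best)  # should land near (1.0, -2.0)
```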

Ikea assembly is the killer app

Let’s say you were given three bowls, with red and green balls in the center one. Then you’re given this on a sheet of paper:

As a human with a brain, you take this paper as instructions, and you understand that the green and red circles represent balls of those colors, and that red ones need to go to the left, while green ones go to the right.

This is one of those things where humans apply vast amounts of knowledge and intuitive understanding without even realizing it. How did you decide the circles represent the balls? Because of the shape? Then why don’t the arrows refer to “real” arrows? How do you know how far to go to the right or left? How do you know the paper even refers to these items at all? All of these are questions you would resolve in a fraction of a second, and any one of them might stump a robot.
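Once that grounding is solved, the remaining sorting task is trivial, which is exactly the point. The sketch below assumes the hard part, reading the diagram into color-to-direction pairs, has already been done; making that connection is what Vicarious’s system is actually learning:

```python
# The diagram has already been "read" into color -> direction pairs;
# the genuinely hard perceptual grounding is assumed solved here.
instructions = {"red": "left", "green": "right"}

def sort_balls(center_bowl, instructions):
    """Route each ball from the center bowl per the diagram's mapping."""
    bowls = {"left": [], "center": [], "right": []}
    for ball in center_bowl:
        destination = instructions.get(ball, "center")
        bowls[destination].append(ball)
    return bowls

result = sort_balls(["red", "green", "green", "red"], instructions)
print(result)  # red balls to the left bowl, green to the right
```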

Researchers have taken some baby steps towards being able to connect abstract representations like the above with the real world, a task that involves a significant amount of what amounts to a sort of machine creativity or imagination.

Making the connection between a green dot on a white background in a diagram and a greenish roundish thing on a black background in the real world isn’t obvious, but the “visual cognitive computer” created by Miguel Lázaro-Gredilla and his colleagues at Vicarious AI seems to be doing pretty well at it.

It’s still very primitive, of course, but in theory it’s the same toolset that one uses to, for example, assemble a piece of Ikea furniture: look at an abstract representation, connect it to real-world objects, then manipulate those objects according to the instructions. We’re years away from that, but it wasn’t long ago that we were years away from a robot getting up from a fall or deciding a suction cup or pincer would work better to pick something up.

The papers and videos demonstrating all the concepts above should be available at the Science Robotics site.


Source: Tech Crunch

Apple reportedly looking to subsidize Watch with Medicare plans

If nothing else, the addition of ECG/EKG reinforced Apple’s commitment to evolving the Watch into a serious medical device. The company has long looked to bring its best-selling wearable to various health insurance platforms, and, according to a new report, it’s reaching out to multiple private Medicare plans in hopes of subsidizing the product.

If Medicare companies bite, the move would make the $279+ tracker much more accessible to older users. Along with electrocardiograph functionality, last year’s Series 4 also features fall detection, an addition that could make it even more appealing to the elderly and healthcare providers.

The new report cites at least three providers that have been in discussions with the company. We’ve reached out to Apple for comment, but I wouldn’t hold my breath on hearing back until the ink is dry on those deals. For Apple, however, such a partnership would help increase the target audience for a product that’s been a rare bright spot in the wearable category.

Apple’s not alone in the serious health push, of course. Fitbit has also been aggressively pursuing the space. Today the company announced its inclusion in the National Institutes of Health’s new All of Us health initiative.


Source: Tech Crunch