Lightning Labs just raised millions from Jack Dorsey and others to supercharge blockchain transactions

Lightning Labs, a young, Bay Area-based startup, is trying to make it easier for users to send bitcoin and litecoin to each other without the costly and time-consuming process of settling their transactions on the blockchain.

It has investors excited about its work, too. The company is announcing today that it has raised $2.5 million in seed funding to date from a who’s who of big names in payments and beyond, including Square and Twitter cofounder Jack Dorsey, Square exec Jacqueline Reses, serial-founder-turned-investor David Sacks, Litecoin creator Charlie Lee, Eventbrite cofounder Kevin Hartz, BitGo CTO Ben Davenport, and Robinhood cofounder Vlad Tenev, along with The Hive, Digital Currency Group, and others.

In an enthusiastic tweet earlier today, Sacks characterized the company as “one of the most important projects in crypto overall.”

Why and how does it work, exactly? For starters, Lightning Labs works off the Lightning Network, a protocol that’s sometimes called the second layer of bitcoin. (Think of it a little like HTTP.) Boosters of this newer layer, including Lightning Labs, see it as a way to exponentially boost the number and speed of transactions on the bitcoin blockchain without increasing the size of blocks — batches of transactions that are confirmed and subsequently shared on bitcoin’s public ledger.

It’s all a little confusing to people still trying to get a handle on how the blockchain works (including yours truly), but Lightning Labs essentially aims to let two or more people — and eventually machines — create instant, high-volume transactions that still use the underlying blockchain for security. How? They assign funds on the blockchain into an entry that requires both parties to sign off on what they plan to spend. Say this is $20. After that opening transaction is recorded, they can transact that amount of money between each other as many times as they want, off-chain. If they want to change the amount of that spend, they just update the entry on the blockchain.
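
To make that lifecycle concrete (open on-chain, transact off-chain any number of times, then settle on-chain), here’s a toy model in Python. This is purely illustrative on our part, not Lightning Labs’ actual protocol or data structures; the real thing is enforced with multi-signature transactions, not objects:

```python
# A toy model of the payment-channel flow described above -- illustrative
# only, not Lightning Labs' actual protocol.
class PaymentChannel:
    def __init__(self, alice_funds, bob_funds):
        # The opening entry both parties sign: funds locked on-chain.
        self.balances = {"alice": alice_funds, "bob": bob_funds}
        self.version = 0       # each off-chain update supersedes the last

    def pay(self, sender, receiver, amount):
        # Off-chain update: instant, and never touches the blockchain.
        assert self.balances[sender] >= amount, "insufficient channel funds"
        self.balances[sender] -= amount
        self.balances[receiver] += amount
        self.version += 1      # only the newest version is valid

    def close(self):
        # A second on-chain transaction settles the final balances.
        return dict(self.balances)

channel = PaymentChannel(alice_funds=20, bob_funds=0)
channel.pay("alice", "bob", 5)    # as many of these as they want...
channel.pay("bob", "alice", 2)
print(channel.close())            # {'alice': 17, 'bob': 3} settles on-chain
```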

If you’re wondering whether there’s room for grift as these transactions move further from the blockchain, so were we. But one of the core tenets of Lightning Labs is that it allows you to do away with counterparty risk. You don’t have to trust someone you are transacting with because — ostensibly, anyway — no one can steal your cryptocurrency.

First, a so-called cryptographic “proof” is created when users initially broadcast that first transaction (and updated versions of it) to the blockchain. That proof ensures that if one party tries to steal from another, not only will they be incapable of doing so, but as a penalty for trying, the thief’s currency will be awarded to the person they were trying to swindle.

As for people who try hopping offline in the middle of a transaction, again with the aim of stealing someone else’s cryptocurrency, there are separate safety measures in place in the form of time-out periods that, when they expire, ensure that the currency sender gets back his or her money. The blockchain acts as a kind of unbiased arbiter.
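
Here’s a heavily simplified sketch of those two safety valves, the stale-state penalty and the time-out refund. The names and shapes are hypothetical; in the real protocol these rules are enforced by on-chain scripts and cryptographic proofs, not application code:

```python
# Illustrative only: the penalty and time-out mechanics described above,
# reduced to plain Python. Real channels enforce this on-chain.
def settle_channel(broadcast_version, latest_version, balances, broadcaster):
    """The blockchain as unbiased arbiter at channel close. If the
    broadcaster published a stale (older, more favorable) state, the
    counterparty can present proof of a newer version and claim the
    cheater's entire balance as the penalty."""
    if broadcast_version < latest_version:
        victim = next(p for p in balances if p != broadcaster)
        balances[victim] += balances[broadcaster]  # thief forfeits everything
        balances[broadcaster] = 0
    return balances

def can_reclaim(blocks_elapsed, timeout_blocks):
    """If the counterparty hops offline mid-transaction, the sender isn't
    stuck: once the time-out expires, they may reclaim their funds."""
    return blocks_elapsed >= timeout_blocks

# Alice tries to close with a stale state in which she had more money:
print(settle_channel(3, 7, {"alice": 17, "bob": 3}, broadcaster="alice"))
# -> {'alice': 0, 'bob': 20}: the attempted theft costs her everything.
```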

Lightning Labs isn’t the only outfit that has sprung up around creating these contracts, but it’s the furthest along, suggests cofounder and CEO Elizabeth Stark, who says more than 1,800 developers are part of a Slack channel and that thousands of volunteers helped her seven-person team find glitches in the alpha version of their open-source software.

Their help enabled the team to release a beta version today that’s open to anyone, though it’s only truly developer-friendly at this point. (You have to use command-line code to operate it. Stark says a much friendlier user interface will be available down the road.)

Stark also suggests that, because the beta version is just being released, people only transact with the amount of money they might carry in a physical wallet. In fact, there are limits on how much you can transact using its software, which Stark says is less to protect users from theft than from them “putting their life savings in bitcoin.” (The presumption: that people will start using bitcoin thanks to the Lightning protocol, because it will let them transact far more quickly and easily in tiny amounts than is possible today.)

Finally, Lightning Labs — which for now enables people to transact with bitcoin and litecoin — is available on desktops only, though a mobile version is coming.

We asked Stark yesterday about the origin of the company. A former lecturer at Stanford and Yale who taught about digital copyright, she said she realized in 2016 that if bitcoin was going to be “used by the entire world, it couldn’t happen on blockchain.” Like a lot of people, Stark says she got excited by the prospect of micropayments, including for artists and musicians. When she edited a paper about the Lightning protocol and realized it might enable people and computers to send high volumes of small payments — think thousands, if not millions — to each other, and for a genuine currency of the web to emerge, she jumped in with both feet alongside cofounder and CTO Olaoluwa Osuntokun.

“He’s the genius behind our work here,” she says of Osuntokun, who holds two computer science degrees from UC Santa Barbara, where he graduated in 2016.

To learn more, you might enjoy this talk that Stark gave on the importance of the layers that Lightning Labs and others are building atop the blockchain.


Source: Tech Crunch

Volley’s voice games for smart speakers have amassed over half a million monthly users

The rapid consumer adoption of smart speakers like Amazon Echo and Google Home has opened opportunities for developers creating voice apps, too. At least that’s true in the case of Volley, a young company building voice-controlled entertainment experiences for Amazon Alexa and Google Home. In less than a year, Volley has amassed an audience north of 500,000 monthly active users across its suite of voice apps, and has been growing that active user base at 50 to 70 percent month-over-month.

The company was co-founded by former Harvard roommates and longtime friends, Max Child and James Wilsterman, and had originally operated as an iOS consultancy. But around a year and a half ago, Volley shifted its focus to voice instead.

“When we were running the iOS business, we were always sort of hacking around on games and some stuff on the side for fun,” explains Child. “We made a trivia game for iOS. And we made a Facebook Messenger chatbot virtual pet,” he says. The trivia game they built let users play just by swiping on push notifications – a very lightweight form of gameplay they thought was intriguing. “Voice was sort of the obvious next step,” says Child.

Not all their voice games have been successful, however. The first to launch was a game called Spelling Bee that users struggled with because of Alexa’s difficulties in identifying single letters – it would confuse a “B,” “C,” “D,” and “E,” for example. But later titles have taken off.


Volley’s name-that-tune trivia game “Song Quiz” was its first breakout hit, and has grown to become the number one game by reviews on Alexa’s skill store. The game today has a five-star rating across 8,842 reviews.

Another big hit is Volley’s “Yes Sire,” a choose-your-own-adventure-style storytelling game that’s also at the top of Alexa’s charts. It, too, has a five-star rating, across 1,031 reviews.

The company says it has over a dozen live titles, with the majority on the Alexa Skill Store and a few for Google Assistant/Google Home. But only seven or eight are in what you would consider “active development.”

Unlike some indie developers who are struggling to generate revenue from their voice applications, Volley has been moderately successful thanks to Amazon’s developer rewards program – the program that doles out cash payments to top-performing skills. While the startup didn’t want to disclose exact numbers, it says it’s earning in the five-figure range monthly from Amazon’s program.

In addition, Volley is preparing to roll out its own monetization features, including subscriptions and in-app purchases of add-on packs that will extend gameplay.

The company’s games have been well-received for a variety of reasons, but one is that they allow people to play together at the same time – like a modern-day replacement for family game night, perhaps.

“I think a live multiplayer experience with your family or people you’re good friends with, where you can have a fun time together in a room, is fairly unusual. I mean, I don’t know about you, but I don’t crowd around my iPhone and play games with my friends. And even with consoles there are significant barriers in understanding how to play,” says Child.

“I think that voice enables the live social experience in a way that anyone from five years old to 85 years old can pick up immediately. I think that’s really special. And I think we’re just at the beginning. I’m not going to say we’ve got it all figured out – but I think that’s powerful and unique to these platforms,” he adds.

Volley raised over $1 million in seed funding ahead of joining Y Combinator’s Winter 2018 class, in a round led by Advancit Capital. Other investors include Amplify.LA, Rainfall, Y Combinator, MTGx, NFX, and angels Hany Nada, Mika Salmi, and Richard Wolpert.

The startup is currently a team of six in San Francisco.



Source: Tech Crunch

Teacher in Ghana who used blackboard to explain computers gets some Microsoft love

Teaching kids how to use a computer is hard enough already, since they’re kids, but just try doing it without any computers. That was the task undertaken by Richard Appiah Akoto in Ghana, and his innovative (and labor-intensive) solution was to draw the computer or application on the blackboard in great detail. His hard work went viral and now Microsoft has stepped in to help out.

Akoto teaches at Betenase Municipal Assembly Junior High in the small town of Sekyedomase. He had posted pictures of his magnum opus, a stunning rendition of a complete Microsoft Word window, to Facebook. “I love ma students so have to do what will make them understand wat am teaching,” he wrote. He looks harried in the last image of the sequence.

The post blew up (9.3K reactions at this point), and Microsoft, which has for years been rather quietly promoting early access to computing and engineering education, took notice. It happened to be just before the company’s Education Exchange in Singapore, and they flew him out.

Akoto in Singapore.

It was Akoto’s first time outside of Ghana, and at the conference, a gathering of education leaders from around the world, he described his all-too-common dilemma: The only computers available — one belonging to the school and Akoto’s personal laptop — were broken.

“I wanted to teach them how to launch Microsoft Word. But I had no computer to show them,” he said in an interview with Microsoft at the event. “I had to do my best. So, I decided to draw what the screen looks like on the blackboard with chalk.”

“I have been doing this every time the lesson I’m teaching demands it,” he continued. “I’ve drawn monitors, system units, keyboards, a mouse, a formatting toolbar, a drawing toolbar, and so on. The students were okay with that. They are used to me doing everything on the board for them.”

Pursuing such a difficult method instead of giving up under such circumstances is more than a little admirable, and the kids are certainly better off for having a teacher dedicated to his class and subject. A little computer literacy can make a big difference.

“They have some knowledge about computers, but they don’t know how to actually operate one,” Akoto said. So Microsoft has offered to provide “device and software support” for the school (I’ve asked for specifics, though they may depend on the school’s needs), and Akoto will get a chance to go through Microsoft’s educator certification program (which has other benefits).

Obviously if this school is having this issue, countless more are as well, and could use similar support. And as Akoto himself eloquently pointed out to NPR when his post first went viral, “They are lacking more than just equipment.”

But at least in this case there are a couple of hundred students who will be getting an opportunity they didn’t have before. That’s a start.


Source: Tech Crunch

Google spent about $270K to close pay gaps across race and gender

Google says there are currently no “statistically significant” pay gaps at the company across race and gender. This is based on the company’s most recent pay analysis, where it looked at unexplained pay discrepancies based on gender and race and then made adjustments where necessary, Google wrote in a blog post today.

In total, Google found statistically significant pay differences for 228 employees across six job groups. So, Google increased the compensation for each of those employees, which came out to about $270,000 in total before finalizing compensation planning. That group of 228 employees included women and men from several countries, including the U.S., as well as black and Latinx employees in the U.S.
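For scale, that works out to an average adjustment of a little under $1,200 per affected employee ($270,000 ÷ 228 ≈ $1,184).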

In its analysis, Google says it looked at every job group with at least 30 employees and at least five people for every demographic group for which Google has data, like race and gender. You can read more about Google’s methodology on its blog.
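
Google hasn’t published its exact model, but pay-equity audits of this kind are typically run as a regression that controls for legitimate pay factors and then tests whether a demographic coefficient remains statistically significant. Here is a minimal sketch of that standard approach (not Google’s actual methodology; the column names are hypothetical):

```python
# A minimal sketch of a standard pay-equity regression -- not Google's
# actual methodology. Column names ("log_comp", "level", etc.) are
# hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

def audit_job_group(df: pd.DataFrame, alpha: float = 0.05):
    # Mirror the thresholds described above: at least 30 employees in the
    # job group, and at least five people per demographic group.
    if len(df) < 30 or df.groupby("gender").size().min() < 5:
        return None  # too small to analyze reliably
    # Regress log compensation on legitimate pay factors plus gender;
    # an unexplained gap shows up as a significant gender coefficient.
    model = smf.ols(
        "log_comp ~ C(level) + C(location) + tenure + C(gender)",
        data=df,
    ).fit()
    return {term: p for term, p in model.pvalues.items()
            if term.startswith("C(gender)") and p < alpha}
```

Job groups that come back non-empty would then get compensation adjustments, as Google describes doing for the 228 flagged employees.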

Earlier this year, Google was hit with a revised gender-pay class-action lawsuit that alleges Google underpaid women in comparison with their male counterparts and asked new hires about their prior salaries.

The revised lawsuit added a fourth complainant, Heidi Lamar, who was a teacher at Google’s Children Center in Palo Alto for four years. The original suit was dismissed in December because the plaintiffs defined the class of affected workers too broadly. Now, the revised lawsuit focuses on those who hold engineer, manager, sales or early childhood education positions.

Prior to the class-action lawsuit, the Department of Labor looked into Google’s pay practices. Last January, the DoL filed a lawsuit against Google in an attempt to gain compensation data, as part of a routine compliance evaluation. In April, the DoL testified in court that pay inequities at Google are “systemic.”

Google, however, denied the DoL’s claims that the pay inequities at the company were systemic. In June, an administrative law judge sided with Google, ruling that it did not need to hand over all of the data the DoL requested.


Source: Tech Crunch

Rihanna calls out Snapchat for tone-deaf ad that made light of domestic violence

Rihanna is not happy with Snap for the tone-deaf ad the company let run on Snapchat. A couple of days ago, an advertisement appeared on the app that alluded to Chris Brown’s violent assault on Rihanna back in 2009.

The advertisement, which was for a game called “Would You Rather,” asked if people would “rather slap Rihanna or punch Chris Brown.” Snap has since removed the advertisement, but Rihanna today said she’s concerned about the message it sends to other survivors of domestic violence.

Here’s what she said in an Instagram story:

Now SNAPCHAT I know you already know you ain’t my fav app out there! But I’m just trying to figure out what the point was with this mess! I’d love to call it ignorance, but I know you ain’t that dumb! You spent money to animate something that would intentionally bring shame to DV victims and made a joke of it!!!! This isn’t about my personal feelings, cause I don’t have much of them…but all the women, children, and men that have been victims of DV in the past and especially the ones who haven’t made it out yet….you let us down! Shame on you. Throw the whole app-oligy away.

In a statement to TechCrunch, Snap apologized for the ad ever going up in the first place. When directly asked about Rihanna’s message, Snap called the advertisement “disgusting.”

“We are so sorry we made the terrible mistake of allowing it through our review process,” a Snap spokesperson said. “We are investigating how that happened so that we can make sure it never happens again.”

Snap has also since blocked the maker of “Would You Rather” from advertising on its platform. On Monday, when people first noticed the ad, Snap said it was reviewed but approved in error.

“We immediately removed the ad last weekend, once we became aware,” Snap said earlier this week.

This massive fail by Snap comes shortly after both Snapchat and Instagram had to remove their Giphy GIF sticker features after a racist GIF appeared as an option.

To be clear, this ad never should have been approved in the first place, according to the company’s own advertising policies.


Source: Tech Crunch

Wikipedia wasn’t aware of YouTube’s conspiracy video plan

YouTube has a plan to combat the abundant conspiracy theories that feature in credulous videos on its platform; not a very good plan, but a plan just the same. It’s using information drawn from Wikipedia relevant to some of the more popular conspiracy theories, and putting that info front and center on videos that dabble in… creative historical re-imaginings.

The plan is being criticized from a number of quarters (including this one) for essentially sloughing responsibility for this harmful content onto another, volunteer-based organization. But it turns out that’s not a responsibility Wikipedia even knew it was taking on.

Wikimedia Foundation executive director Katherine Maher notes that YouTube did this whole thing “independent” of the organization, and an official statement from Wikimedia says it was “not given advance notice of this announcement.”

Everyone on the Wikimedia side is taking this pretty much in stride, expressing happiness at seeing their content used to drive the sharing of “free knowledge.” But it does seem like something YouTube could have flagged to them before announcing the new feature onstage at SXSW.

Maybe YouTube couldn’t say anything because the Illuminati bound them to secrecy… because of the chemtrails.


Source: Tech Crunch

Voicery makes synthesized voices sound more like humans

Advancements in A.I. technology have paved the way for breakthroughs in speech recognition, natural language processing, and machine translation. A new startup called Voicery now wants to leverage those same advancements to improve speech synthesis, too. The result is a fast, flexible speech engine that sounds more human – and less like a robot. Its machine voices can then be used anywhere a synthesized voice is needed – including in new applications, like automatically generated audiobooks or podcasts, voiceovers, TV dubs, and elsewhere.

Before starting Voicery, co-founder Andrew Gibiansky had worked at Baidu Research, where he led the deep learning speech synthesis team.

While there, the team developed state-of-the-art techniques in the field of machine learning, published papers on speech constructed from deep neural networks and artificial speech generation, and commercialized its technology in production-quality systems for Baidu.

Now, Gibiansky is bringing that same skill set to Voicery, where he’s joined by co-founder Bobby Ullman, who previously worked at Palantir on databases and scalable systems.

“In the time that I was at Baidu, what became very evident is that the revolution in deep learning and machine learning was about to happen to speech synthesis,” explains Gibiansky. “In the past five years, we’ve seen that these new techniques have brought amazing gains in computer vision, speech recognition, and other industries – but it hasn’t yet happened with synthesizing human speech. We saw that if we could use this new technology to build speech synthesis engines, we could do it so much better than everything that currently exists.”

Specifically, the company is leveraging newer deep learning technologies to create better synthesized voices more quickly than before.

In fact, the founders built Voicery’s speech synthesis engine in just two and a half months.

Unlike traditional voice synthesizing solutions, where a single person records hours upon hours of speech that’s then used to create the new voice, Voicery trains its system on hundreds of voices at once.

It can also use varying amounts of speech input from any one person. Because of how much data it takes in, the system sounds more human as it learns the correct pronunciations, inflections and accents from a wider variety of source voices.
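
For a sense of what training on hundreds of voices at once can look like mechanically, here is a minimal sketch of speaker-embedding conditioning in a neural TTS model. It is purely illustrative, since Voicery hasn’t published its architecture:

```python
# Illustrative multi-speaker TTS conditioning -- not Voicery's actual model.
import torch
import torch.nn as nn

class MultiSpeakerTTS(nn.Module):
    def __init__(self, n_speakers, n_phonemes=70, spk_dim=64,
                 hidden=256, n_mels=80):
        super().__init__()
        self.phoneme_emb = nn.Embedding(n_phonemes, hidden)
        # One learned vector per speaker: training on many voices lets
        # pronunciation, inflection, and accent knowledge be shared,
        # while each voice keeps its own identity vector.
        self.speaker_emb = nn.Embedding(n_speakers, spk_dim)
        self.encoder = nn.GRU(hidden + spk_dim, hidden, batch_first=True)
        self.to_mel = nn.Linear(hidden, n_mels)  # predict spectrogram frames

    def forward(self, phoneme_ids, speaker_id):
        x = self.phoneme_emb(phoneme_ids)             # (B, T, hidden)
        s = self.speaker_emb(speaker_id)              # (B, spk_dim)
        s = s.unsqueeze(1).expand(-1, x.size(1), -1)  # broadcast over time
        out, _ = self.encoder(torch.cat([x, s], dim=-1))
        return self.to_mel(out)                       # (B, T, n_mels)
```

The key point: data from every speaker updates the shared encoder, which is why a system trained this way can get by with varying (and smaller) amounts of speech from any one person.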

The company claims its voices are nearly indistinguishable from humans’ – it even published a quiz on its website challenging visitors to identify which voices are synthesized and which are real. I found that you can still pick out the machines, but they’re much better than the machine-reader voices you may be used to.

Of course, given the rapid pace of technology development in this field – not to mention the fact that the team built their system in a matter of months – one has to wonder why the major players in voice computing couldn’t just do something similar with their own in-house engineering teams.

However, Gibiansky says that Voicery has the advantage of being first out of the gate with technology that capitalizes on these machine learning advancements.

“None of the currently published research is quite good enough for what we wanted to do, so we had to extend that a fair bit,” he notes. “Now we have several voices that are ready, and we’re starting to find customers to partner with.”

Voicery already has a few customers piloting the technology, but nothing to announce at this time as those talks are in various stages.

The company charges customers an upfront fee to develop a new voice, and then a per-usage fee.

The technology can be used where voice systems exist today, like in translation apps, GPS navigation apps, voice assistant apps, or screen readers, for example. But the team also sees the potential for it to open up new markets, given the ease of creating synthesized voices that really sound like people. This includes things like synthesizing podcasts, reading the news (think: Alexa’s “Flash Briefing”), TV dub-ins, voices for characters in video games, and more.

“We can move into spaces that fundamentally haven’t been using the technology because it hasn’t been high enough quality. And we have some interest from companies that are looking to do this,” says Gibiansky.

Voicery, based in San Francisco, is bootstrapped save for the funding it has received by participating in Y Combinator’s Winter 2018 class. It’s looking to raise additional funds after YC’s Demo Day.


Source: Tech Crunch

Reddit set to begin rolling out promoted post ads in their native apps

For how massive Reddit is in terms of user base, it got by for a long time with a product that advanced about as quickly as the Drudge Report. That’s been changing lately, as the company has looked to mature its platform with user-centric features that make surfing content easier and keep everything a bit more connected.

The company didn’t raise $200 million from top investors just because they thought it could deliver memes more beautifully. The company has — in fact — barely touched advertising, and few entities know more about their users’ interests than Reddit.

Next week, the company will launch native promoted post ads in its Reddit iOS app, with an Android version following soon after. The company informed advertisers of this in an email, MarketingLand reports. The apps had rocky starts but have proven to offer a vastly improved user experience over what came before them. There are still some blind spots here and there, but considering how slowly the company moved previously, the apps are fairly impressive.

Few user bases are more vocal or more resistant to change than the hundreds of millions of monthly active Redditors, and advertisers are likely similarly hesitant to get involved with a platform that has churned out controversy at a steady pace over the past few years. Nevertheless, Reddit has already set its course toward building out a better ads product, and native promoted ads represent a big step in that direction.


Source: Tech Crunch

Equifax exec charged with insider trading, selling shares ahead of hack news

Former Equifax exec Jun Ying has been charged with insider trading, according to the Securities and Exchange Commission. Ying is accused of knowing that Equifax had been hacked and selling company shares before the public was notified.

Ying, who was “next in line to be the company’s global CIO, allegedly used confidential information entrusted to him by the company to conclude that Equifax had suffered a serious breach,” says the SEC release. He sold $1 million in shares and avoided a potential loss of $117,000.

Following the revelation of a widespread hack at the credit reporting agency, Equifax shares took a tumble on the stock market. Shares were above $142 and quickly fell to beneath $93 in the subsequent days.

Ying wasn’t the only employee who sold shares, and several execs were accused of insider trading as a result. TechCrunch wrote about the different executives at the time, and received this defense from Equifax, particularly with regard to the CFO.

“As announced in the press release, Equifax discovered the cybersecurity incident on Saturday, July 29. The company acted immediately to stop the intrusion.

The three executives who sold a small percentage of their Equifax shares on Tuesday, August 1, and Wednesday, August 2, had no knowledge that an intrusion had occurred at the time they sold their shares.”


Source: Tech Crunch

Google explores how light fields shape VR environments in new free app

Lighting can make or break the right photo — when it comes to static environments inside virtual reality that users can move around in, this becomes exponentially more true.

Today, Google released a new app for VR devices focused on helping users make sense of “light fields.” They’ve also got a blog post running down some of the research work they’re doing.

Light fields — in a practical sense — are basically different perspectives on a point in space, based on how the light there looks from each angle. If you look at something like your phone screen, part of what makes it look realistic is how images reflect off of it. Most physical objects don’t offer so clear a mirror of the world around them, but even something like your own skin can have a dramatically different-looking texture depending on where your eyes are.

In a game-engine-rendered world, if you have enough compute power, you can reflect the hell out of everything to varying levels of success. When it comes to light fields based on real-world camera capture, companies like Google are using multiple cameras to capture multiple perspectives of objects and to infer the perspectives between the lenses. With this, you get views of objects that move with you, with lighting that changes as you move your head.
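
For the curious, here is a toy sketch of that core trick of inferring the views between lenses, using the classic two-plane light field parameterization. The parameterization choice is our assumption; Google hasn’t detailed its pipeline here:

```python
# A toy light field lookup with view interpolation -- illustrative only,
# assuming the classic two-plane (u, v, s, t) parameterization.
import numpy as np

def render_ray(lf, u, v, s, t):
    """lf: 4D array of captured radiance samples, shape (U, V, S, T).
    Returns the radiance for a ray through camera-plane point (u, v)
    and focal-plane point (s, t). Rays the cameras never recorded are
    inferred by bilinearly blending the four nearest captured views.
    Assumes (u, v) lies strictly inside the camera grid."""
    u0, v0 = int(np.floor(u)), int(np.floor(v))
    du, dv = u - u0, v - v0
    si, ti = int(round(s)), int(round(t))  # nearest focal-plane sample
    return ((1 - du) * (1 - dv) * lf[u0,     v0,     si, ti] +
            du       * (1 - dv) * lf[u0 + 1, v0,     si, ti] +
            (1 - du) * dv       * lf[u0,     v0 + 1, si, ti] +
            du       * dv       * lf[u0 + 1, v0 + 1, si, ti])
```

As your head moves, (u, v) changes, so reflective surfaces like skin or a phone screen pick up the view-dependent shading the cameras actually saw.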

It’s a complicated way of saying that real-world scenes look a lot more realistic and just… better. That’s just my take on it, anyway; in Google’s own framing, the new app, “Welcome to Light Fields,” seeks to educate users on what exactly light fields are and how important the technology could be to unlocking more pleasant-feeling virtual reality experiences. The app seems to consist of a number of fairly simple scenes where users can walk around and observe how light changes these environments.

The app is available for the HTC Vive, Oculus Rift and Windows Mixed Reality platforms. If you’re wondering why the company left out its own Daydream platform, it’s because you really need positional tracking in order to see what’s happening with light fields. Daydream will be gaining that tracking soon with the launch of Lenovo’s standalone 6DoF headset, but we’re still waiting on that one to go on sale.

Light fields present a number of technical issues for developers that go beyond just capturing them; chief among those issues is bandwidth. Light fields turn every file into a potentially massive endeavor to ship. For the photo environments Google seems to be playing with here, it’s already difficult enough, but video files that are just a few minutes long can stretch into the terabytes quite quickly, so there are clearly still some things to figure out here.
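Back-of-the-envelope, with hypothetical numbers: a rig of 16 cameras capturing uncompressed 4K at 30 frames per second produces roughly 25 MB per frame per camera, or about 12 GB per second across the rig. That is over 2 TB for a three-minute clip, before any of the interpolated view data is added.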


Source: Tech Crunch