Google releases a searchable database of US political ads

In an effort to provide more transparency and deliver on a promise to Congress, Google just published an archive of political ads that have run on its platform.

Google’s new database, which it calls the Ad Library, is searchable through a dedicated launch page. Anyone can search for and filter ads, viewing them by candidate name or advertiser, spend, the dates the ads were live, impressions and type. For anyone looking for the biggest ad budget or the farthest-reaching political ad, the ads can be sorted by spend, impressions and recency as well. Google also provided a report on the data, showing ad spend by U.S. state, by advertiser and by top keywords.

The company added a bit of context around its other recent ad transparency efforts:

Earlier this year, we took important steps to increase transparency in political advertising. We implemented new requirements for any advertiser purchasing election ads on Google in the U.S.—these advertisers now have to provide a government-issued ID and other key information that confirms they are a U.S. citizen or lawful permanent resident, as required by law. We also required that election ads incorporate a clear “paid for by” disclosure.

The search features are pretty handy, but a few things are missing. While Google’s database does collect candidate ads in the U.S., it does not include issue ads — broader campaigns meant to influence public thought around a specific political topic — nor does it collect state or local ads. The ads are all U.S.-only, so elections elsewhere won’t show up here either. Google says that it is collaborating with experts on potential tools that “capture a wider range of political ads,” but it gave no timeline for that work. For now, ads that the tool does capture will be added to the library on a weekly basis.


Source: Tech Crunch

Uber reports Q2 losses of $404 million, up 32 percent from Q1

While Uber isn’t required to disclose its financial results, it has done so for the past few quarters as it gears up to go public next year. In Q2 2018, Uber’s net revenue was up 8 percent quarter-over-quarter, at $2.7 billion. Year-over-year, that’s a 51 percent increase.

Uber recorded gross bookings — the total taken for all of Uber’s transportation services — of $12 billion, a 6 percent quarter-over-quarter increase and a 41 percent year-over-year increase. But while Uber’s gross bookings increased, so did its losses. In Q2, Uber had adjusted EBITDA losses of $404 million, compared to $304 million in losses in Q1.

Uber’s losses added up, given its investments in Eats, India, the Middle East, bikes and scooters. This quarter, Uber expanded Eats into a number of new cities in Europe, the Middle East and Africa, acquired food delivery startup Ando, announced its expansion of JUMP bikes into Europe and made its scooter ambitions official.

Other key stats for Uber’s Q2 2018:

  • Adjusted EBITDA margin: 3.4 percent of gross bookings (in Q2 ’17, that was 6.3 percent)
  • Gross cash: $7.3 billion (+$1 billion quarter-over-quarter)

“We had another great quarter, continuing to grow at an impressive rate for a business of our scale,” Uber CEO Dara Khosrowshahi said in a statement. “Going forward, we’re deliberately investing in the future of our platform: big bets like Uber Eats; congestion and environmentally friendly modes of transport like Express Pool, e-bikes and scooters; emerging businesses like Freight; and high-potential markets in the Middle East and India where we are cementing our leadership position.”

While Uber technically had a good quarter, it doesn’t mean that all is well. Regarding Uber’s self-driving car efforts, the company has spent between $125 million and $200 million a quarter over the last 18 months, The Information reports. According to The Information’s sources, some of Uber’s investors are urging the company to get rid of its self-driving car program, which has been the source of many headaches at Uber as of late.

Uber declined to comment on The Information’s reporting.

In March, one of Uber’s self-driving cars struck and killed a pedestrian in Tempe, Arizona. In the weeks and months following the accident, Uber officially pulled the plug on its self-driving car operations in Arizona and laid off self-driving car operators in San Francisco and Pittsburgh.

As Uber prepares for its 2019 IPO, the name of the game is to reduce losses. In July, Uber shut down its self-driving trucks division. But Uber Freight, which matches drivers with cargo needing to be shipped, is reportedly on track to make $500 million in the next 12 months.

Meanwhile, Uber is aiming to take its ride-hail network into the skies with uberAIR. Uber’s plan is to develop and commercially deploy these air taxis by 2023. But in recent months, Uber has lost two key executives: Justin Erlich, its head of policy for autonomous vehicles and urban aviation, and Jeff Holden, its chief product officer, who oversaw Uber Elevate, have both left the company.

Khosrowshahi will be joining us at Disrupt SF in September. You don’t want to miss it.


Source: Tech Crunch

Coinbase acquires Distributed Systems to build ‘Login with Coinbase’

Coinbase wants to be Facebook Connect for crypto. The blockchain giant plans to develop ‘Login with Coinbase’ or a similar identity platform for decentralized app developers, making it much easier for users to sign up and connect their crypto wallets. To fuel that platform, today Coinbase announced it has acquired Distributed Systems, a startup founded in 2015 that was building an identity standard for dApps called the Clear Protocol.

The five-person Distributed Systems team and its technology will join Coinbase. Three of the team members will work with Coinbase’s Toshi decentralized mobile browser team, while CEO Nikhil Srinivasan and his co-founder Alex Kern are forming the new decentralized identity team that will work on the ‘Login with Coinbase’ product. They’ll be building it atop the “know your customer” anti-money laundering data Coinbase has on its 20 million customers. Srinivasan tells me the goal is to figure out “How can we allow that really rich identity data to enable a new class of applications?”

Distributed Systems had raised a $1.7 million seed round last year led by Floodgate and was considering raising a $4 million to $8 million round this summer. But Srinivasan says “No one really understood what we’re building,” and the startup wanted a partner with KYC data. It began talking to Coinbase Ventures about an investment, but after the team there saw Distributed Systems’ progress and vision, “they quickly tried to move to find a way to acquire us.”

Distributed Systems began to hold acquisition talks with multiple major players in the blockchain space, and the CEO tells me it was deciding between going to “Facebook, or Robinhood, or Binance, or Coinbase”, having been in formal talks with at least one of the first three. Coinbase “were able to convince us they were making big bets, weaving identity across their products.” The financial terms of the deal weren’t disclosed.

Coinbase’s plan to roll out the ‘Login with Coinbase’-style platform is an SDK that other apps could integrate, though that won’t necessarily be the feature’s name. That mimics the way Facebook colonized the web with its SDK and login buttons that splashed its brand in front of tons of new and existing users. This turned Facebook into a fundamental identity utility beyond its social network.

Developers eager to improve conversions on their sign-up flow could turn to Coinbase instead of requiring users to set up whole new accounts and deal with the crypto-specific headaches of complicated keys and procedures for connecting their wallet to make payments. One prominent dApp developer told me yesterday that forcing users to set up the MetaMask browser extension for identity was the part of their sign-up flow where they lose the most people.
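
Coinbase hasn’t published this SDK, so the details are anyone’s guess. But if ‘Login with Coinbase’ follows the same OAuth-style pattern Facebook Login made standard, a minimal server-side sketch might look like the following; every endpoint URL, scope, and response field here is hypothetical.

```python
# Hypothetical sketch of an OAuth2-style "Login with Coinbase" integration.
# Coinbase has not published this SDK; the endpoints, scopes, and response
# fields below are invented to illustrate the Facebook Login pattern.
import secrets
from urllib.parse import urlencode

import requests

AUTH_URL = "https://login.coinbase.example/oauth/authorize"  # hypothetical
TOKEN_URL = "https://login.coinbase.example/oauth/token"     # hypothetical
CLIENT_ID = "your-dapp-client-id"
CLIENT_SECRET = "your-dapp-client-secret"
REDIRECT_URI = "https://yourdapp.example/callback"


def login_url(state: str) -> str:
    """Build the URL a dApp sends users to instead of its own signup form."""
    params = {
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
        "response_type": "code",
        "scope": "identity wallet:read",  # hypothetical scopes
        "state": state,                   # anti-CSRF token kept in the session
    }
    return f"{AUTH_URL}?{urlencode(params)}"


def exchange_code(code: str) -> dict:
    """Swap the callback's one-time code for the user's identity/wallet info."""
    resp = requests.post(TOKEN_URL, data={
        "grant_type": "authorization_code",
        "code": code,
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "redirect_uri": REDIRECT_URI,
    }, timeout=10)
    resp.raise_for_status()
    return resp.json()  # e.g. {"access_token": ..., "wallet_address": ...}


print(login_url(secrets.token_urlsafe(16)))
```

The appeal of the pattern is the one Facebook proved: one button, no new password, and the identity provider handles the hard parts.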

This morning Coinbase CEO Brian Armstrong confirmed these plans to work on an identity SDK. When Coinbase investor Garry Tan of Initialized Capital wrote that “The main issue preventing dApp adoption is lack of native SDK so you can just download a mobile app and a clean fiat to crypto in one clean UX. Still have to download a browser plugin and transfer Eth to Metamask for now Too much friction”, Armstrong replied “On it :)”

In effect, Coinbase and Distributed Systems could build a safer version of identity than we get offline. As soon as you give your social security number to someone or it gets stolen, it can be used anywhere without your consent and that leads to identity theft. Coinbase wants to build a vision of identity where you can connect to decentralized apps while retaining control. “Decentralized identity will let you prove that you own an identity, or that you have a relationship with the Social Security Administration, without making a copy of that identity” writes Coinbase’s PM for identity B Byrne, who’ll oversee Srinivasan’s new decentralized identity team. “If you stretch your imagination a little further, you can imagine this applying to your photos, social media posts, and maybe one day your passport too.”

Considering Distributed Systems and Coinbase are following the Facebook playbook, they may soon have competition from the social network. It has spun up its own blockchain team, and an identity and single sign-on platform for dApps is one of the products I think Facebook is most likely to build. But given Coinbase’s strong reputation in the blockchain industry and its massive head start in terms of registered crypto users, today’s acquisition positions it well to be how we connect our offline identity with the rising decentralized economy.


Source: Tech Crunch

Spotify is falling behind on lyrics and voice

Spotify’s lack of full lyrics support and its minimal attention to voice are beginning to become problems for the streaming service. The company has been so focused on developing its personalization technology and programming its playlists that it has overlooked key features that its competitors – including Apple, Google, and Amazon – now offer and are capitalizing on.

For example, in the updated version of Apple Music rolling out this fall with iOS 12, users won’t just have access to lyrics in the app as before, they will also be able to perform searches by lyrics instead of only by the artist, album, or song title.

And Apple Music is actually playing catch up with Amazon on this front.

Amazon Music, which has quietly grown to become the third largest music streaming service, allows users to view the lyrics as songs play, and ties that to its Alexa voice platform. Amazon Music users with an Alexa device can also search for songs by lyrics just by saying “play the song that goes…”.

The company has been offering this capability for close to two years. While it had originally been one of Alexa’s hidden gems, today asking Alexa to pull up a song by its lyrics is considered a standard feature.

Though Google has lagged behind Apple, Spotify and Amazon in music, its clever Google Assistant is capable of search-by-lyrics, too. And as an added perk, it can also work like Shazam to identify a song that’s playing nearby.

With the rise of voice-based computing, features like asking for songs with verbal commands or querying databases of lyrics by voice are now expected features.

And where’s Spotify on this?

It has launched lyrics search only in Japan so far, and refuses to provide a timeline for when it will make this a priority in other markets. Even the references to lyrics tests tucked away in the app’s code point only to the non-U.S. markets of Thailand and Vietnam.

Those tests have been underway since the beginning of the year, we understand from sources. But the attention being given to these tests is minimal – Spotify isn’t measuring user engagement with the lyrics feature at this point. And Spotify CEO Daniel Ek wasn’t even aware his team was working on these lyrics tests, we heard, which implies a lack of management focus on this product.

Meanwhile, competitors like Apple and Amazon have dedicated lyrics teams.

We asked Spotify multiple times if it was currently testing lyrics in the U.S. (You can see one person who claims they gained access here, for example.) But the company never responded to our questions.

Image credit: Imgur via Reddit user spalatidium

Some Spotify customers who largely listen to popular music may be confused about the lack of a full lyrics product in the app. That’s because Spotify partnered with Genius in 2016 to launch “Behind the Lyrics,” which offers lyrics and music trivia on a portion of its catalog. But you don’t see all of a song’s lyrics while the music plays because they’re interrupted with facts and other background information about the song, the lyrics’ meaning, or the artist.

That same year, Spotify also ditched its ties with Musixmatch, which had been providing its lyrics support, as the two companies could no longer come to an agreement. Users expected lyrics would return at some point – but only “Behind the Lyrics” emerged to fill the void.

Demand for a real lyrics feature remains strong, though. Users regularly post on social media and Reddit about the topic.

A request for lyrics’ return is also one of the most upvoted product ideas on Spotify’s user feedback forum. It has 9,237 “likes,” making it the second-most popular request.

(The idea has been flagged “Watch this Space,” but it’s been tagged that way for so long that it’s no longer a promise of something soon to come.) There is no internal solution in the works, we understand, and Spotify isn’t working on a new deal with a third party at this time.

The lack of lyrics is becoming a problem in other areas, as well, now that competitors are launching search-by-lyrics features that work via voice commands.

In fact, Spotify was late, in general, to address users’ interest in voice assistance – even though a primary use case for music listening is when you’re on the go – like, in the car, out walking or jogging, at the gym, biking, etc.

It only began testing a voice search option this spring, accessible through a new in-app button. Now rolled out to mobile users on Spotify Premium, the voice search product works via a long-press on the Search button in the app. You can then ask Spotify to play music, playlists, podcasts, and videos.

But the feature is still wonky. For one thing, hiding it away as a long press-triggered option means many users probably don’t know it exists. (And the floating button that pops up when you switch to search is hard to reach.) Secondly, it doesn’t address the primary reason users want to search by voice: hands-free listening.

Meanwhile, iPhone/HomePod users can tell Siri to play music with a hands-free command; Google Assistant/Google Home users can instruct the helper to play their songs – even if they only know the lyrics. And Amazon Music’s Alexa integration is live on Echo speakers, and available hands-free in its Music app.

Even third-party music services like Pandora are tapping into the voice platforms’ capabilities to provide search by lyrics. For example, Pandora Premium launched this week on Google Assistant devices like the Google Home, and offers search-by-lyrics powered by Google Assistant.

Spotify can’t offer a native search-by-lyrics feature in its app, much less a voice-driven search-by-lyrics option, because it doesn’t even have fully functional lyrics support.

Voice and lyrics aren’t the only challenges Spotify is facing going forward.

Spotify also lacks dedicated hardware like its own Echo or HomePod. Given the rise of voice-based computing and voice assistants, the company risks ceding some portion of the market as consumers buy into the larger ecosystems provided by the main tech players: Siri/HomePod/Apple Music vs. Google Assistant/Google Home/Google Play Music (or YouTube Music) vs. Alexa/Echo/Amazon Music (all promoted by Prime).

For now, Spotify works with partners to make sure its service performs on their platforms, but Apple isn’t playing nice in return.

Elsewhere, Spotify may play – even by voice – but won’t be as fully functional as the native solutions. With Spotify as the default service on Echo devices, for example, Alexa can’t always figure out commands that instruct it to play music by lyrics, activity, or mood – commands that work well with Amazon Music, of course.

Other cracks in Spotify’s dominance are starting to show, too.

Amazon Music has seen impressive growth, thanks to adoption in four key Prime markets: the U.S., Japan, Germany and the U.K. Now with 12 percent of the music streaming market, it has become the dark horse that’s been largely ignored amid discussions of the Apple vs. Spotify battle. But it’s not necessarily one to count out just yet.

YouTube Music, though brand new, has managed to snag Lyor Cohen as its Global Music Head, while Spotify’s latest headlines are about losing Troy Carter.

Meanwhile, Apple CEO Tim Cook just announced during the last earnings call that Apple Music has moved ahead of Spotify in North America. In a recent interview, he also warned against ceding too much control to algorithms, making a sensible argument for maintaining music’s “spiritual role” in our lives.

“We worry about the humanity being drained out of music, about it becoming a bits-and-bytes kind of world instead of the art and craft,” Cook mused.

Apple was late to music streaming, having been so tied to its download business. But it also had the luxury of time to get it right, knowing that its powerful iPhone platform means anything it launches has a built-in advantage. (And it’s poised to offer TV shows as a part of its subscription, too, which could be a further draw.)

How much time does Spotify have to get it right?

Despite these concerns, Spotify doesn’t need to panic yet – it still has more listeners, more paying customers, and more consumer mindshare in the music streaming business. It has its popular playlists and personalization features. It has its RapCaviar. But it will need to plug its holes to keep up where the market is heading, or risk losing customers to the larger platforms in the months ahead.


Source: Tech Crunch

Grabb-It wants to turn your car’s window into a trippy video billboard

It reminds me of something out of Blade Runner.

Maybe it’s because it looks a bit futuristic – a bit unreal. Maybe it’s because I’m looking at an ad somewhere I never expected to see one, like the skyscraper-height ads of Ridley Scott’s future.

Grabb-It turns a car’s side rear window into a full color display, playing location-aware ads to anyone who might be standing curbside. They’re currently aiming to work with rideshare/delivery drivers, enabling them to make a bit of extra coin while doing the driving they’re already doing.

As the driver crosses town, the ads can automatically switch to focus on businesses nearby. Near the ballpark? It might pitch you on tickets for tonight’s game. Over in The Mission? It could play an ad about happy hour at the bar behind you.
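
Grabb-It hasn’t said how its targeting works under the hood, but the core of any location-aware rotation like this is a nearest-business lookup. Here’s a toy sketch, with made-up ad data and no claim to match Grabb-It’s actual logic:

```python
# Toy location-aware ad picker: play the ad whose business is nearest the car.
# Our illustration only; Grabb-It has not described its actual targeting logic.
import math

ADS = [  # (ad name, latitude, longitude) -- made-up San Francisco spots
    ("Ballpark tickets tonight", 37.7786, -122.3893),
    ("Happy hour in The Mission", 37.7599, -122.4148),
]


def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))


def pick_ad(car_lat, car_lon):
    """Return the ad for the business closest to the car's current position."""
    return min(ADS, key=lambda ad: haversine_km(car_lat, car_lon, ad[1], ad[2]))


print(pick_ad(37.7780, -122.3900)[0])  # near the ballpark -> ballpark ad
```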

So how’s it work? I couldn’t figure it out at first glance – but once they opened the car door, it all clicked.

The key: projection. It turns your window into a rear projection TV on wheels, of sorts.

Grabb-It applies a material to the inside of a car’s right rear window to act as a projection surface. The material is thin enough that the window can still be opened — but, in what might annoy some passengers, not thin enough that you can see much through it. They mount a small projector inside the car and point it toward the window, blasting an image bright enough to see from the outside. I saw it running in a dim below-ground parking lot and outside in direct sunlight, and the image was surprisingly clear in both cases.

The end result is quite neat to see (which is something I’m really not used to saying about tech meant to show me ads). Because the projection material is custom cut for each car, the image can cover pretty much the entire surface of the window glass. It gives the illusion of a display custom built for the contours of the car.

It’s meant to only run when the driver is between rides. Once a passenger hops in the car, the projector is shut off – because, well, no one wants a projector blasting light in their face on the way to their next meeting.

While the company is working on its own hardware kit, the build I saw was an early iteration running a small off-the-shelf projector. Even at this stage, it’s a pretty effective demo. While this prototype requires the driver to manually toggle the projector by remote control, Grabb-It’s founders tell me their eventual hardware will automatically detect when the rear doors open and cut the projector on-the-fly. The image juddered a bit as the idling engine vibrated, though that seems like something that could be improved with better damping.

I am a bit wary of the distraction factor; will a fully animated ad playing on the car next to you work out to eyes off the road ahead? While Grabb-It tells me they’re working with the proper authorities to ensure it’s all road-legal, I imagine people might contest it as more cars utilizing the tech hit the streets.

Grabb-It says they’ll cover the cost of installation for drivers – and if a driver decides to remove it, it’s just a matter of unmounting the projector and peeling the projection material from the window.

The company tells me it’s currently testing with around 25 drivers around San Francisco, with earnings working out to around $300 a month for those driving 40 hours a week. It’s not enough to pay the bills on its own, but it’s a solid chunk of change for something that will, if all goes to plan, be entirely automated.

Grabb-It is part of Y Combinator’s Summer 2018 class, and has raised $100k outside of YC from Lyft founding investor Sean Aggarwal.


Source: Tech Crunch

Smart speaker sales on pace to increase 50 percent by 2019

It seems Amazon didn’t know what it had on its hands when it released the first Echo in late 2014. The AI-powered speaker formed the foundation of the next big moment in consumer electronics. Those devices have helped mainstream consumer AI and opened the door to wide-scale adoption of connected home products.

New numbers from NPD show no sign of the category flagging. According to the firm, the devices are set for 50 percent dollar growth between the 2016-2017 and 2018-2019 periods, with the category projected to add $1.6 billion through next year.

The Echo line has grown rapidly over the past four years, with Amazon adding the best-selling Dot and screen-enabled products like the Spot and Show. Google, meanwhile, has been breathing down the company’s neck with its own Home offerings. The company also recently added a trio of “smart displays” designed by LG, Lenovo and JBL.

A new premium category has also arisen, led by Apple’s first entry into the space, the HomePod. Google has similarly offered up the Home Max, and Samsung is set to follow suit with the upcoming Galaxy Home (which more or less looks like a HomePod on a tripod).

As all of the above players were no doubt hoping, smart speaker sales also appear to be driving sales of smart home products, with 19 percent of U.S. consumers planning to purchase one within the next year, according to the firm.


Source: Tech Crunch

StarVR’s One headset flaunts eye-tracking and a double-wide field of view

While the field of VR headsets used to be more or less limited to Oculus and Vive, numerous competitors have sprung up as the technology has matured — and some are out to beat the market leaders at their own game. StarVR’s latest headset brings eye-tracking and a seriously expanded field of view to the table, and the latter especially is a treat to experience.

The company announced the new hardware at SIGGRAPH in Vancouver, where I got to go hands-on and eyes-in with the headset. Before you get too excited, though, keep in mind this set is meant for commercial applications — car showrooms, aircraft simulators, and so on. What that means is it’s going to be expensive and not as polished a user experience as consumer-focused sets.

That said, the improvements present in the StarVR One are significant and immediately obvious. Most important is probably the expanded FOV — 210 degrees horizontal and 130 vertical. That’s nearly twice the 110-degree horizontal FOV of the most popular headsets, and believe me, it makes a difference. (I haven’t tried the Pimax 8K, which has a similarly wide FOV.)

On Vive and Oculus sets I always had the feeling that I was looking through a hole into the VR world — a large hole, to be sure, but having your peripheral vision be essentially blank made it a bit claustrophobic.

In the StarVR headset, I felt like the virtual environment was actually around me, not just in front of me. I moved my eyes around much more rather than turning my head, with no worries about accidentally gazing at the fuzzy edge of the display. A 90 Hz refresh rate meant things were nice and smooth.

To throw shade at competitors, the demo I played (I was a giant cyber-ape defending a tower) could switch between the full FOV and a simulation of the 110-degree one found in other headsets. I suspect it was slightly exaggerated, but the difference really is clear.

It’s reasonably light and comfortable — no VR headset is really either. But it doesn’t feel as chunky as it looks.

The resolution of the custom AMOLED display is supposedly 5K. But the company declined to specify the actual resolution when I asked. They did, however, proudly proclaim full RGB pixels and 16 million sub-pixels. Let’s do the math:

16 million divided by 3 makes around 5.3 million full pixels. 5K isn’t a real standard, just shorthand for having around 5,000 horizontal pixels between the two displays. Divide 5.3 million by that and you get about 1,060 vertical pixels. Rounding those off to semi-known numbers gives us 2560 pixels per eye for the horizontal resolution and 1080 for the vertical.

That doesn’t fit the approximately 16:10 ratio of the field of view, but who knows? Let’s not get too bogged down in unknowns. Resolution isn’t everything — but generally, the more pixels the better.
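
If you want to sanity-check that arithmetic, it fits in a few lines of Python; the 16 million sub-pixel figure is the company’s, while the 5,000-pixel “5K” divisor and the rounding targets are our assumptions.

```python
# Back-of-the-envelope check of StarVR's "5K" claim. The 16 million sub-pixel
# count and full-RGB claim are the company's figures; the 5,000-pixel "5K"
# divisor and the rounding to 2560 x 1080 are our assumptions.
sub_pixels = 16_000_000
full_pixels = sub_pixels / 3               # full RGB: 3 sub-pixels per pixel
horizontal_total = 5_000                   # "5K" across both displays
vertical = full_pixels / horizontal_total  # ~1,067 rows

print(f"~{full_pixels / 1e6:.1f}M pixels, ~{vertical:.0f} rows, "
      f"~{horizontal_total / 2:.0f} columns per eye (round to 2560 x 1080)")
```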

The other major new inclusion is an eye-tracking system provided by Tobii. We knew eye-tracking in VR was coming; it was demonstrated at CES, and the Fove Kickstarter showed it was at least conceivable to integrate into a headset now-ish.

Unfortunately the demos of eye-tracking were pretty limited (think a heatmap of where you looked on a car) so, being hungry, I skipped them. The promise is good enough for now — eye tracking allows for all kinds of things, including a “foveated rendering” that focuses display power where you’re looking. This too was not being shown, however, and it strikes me that it is likely phenomenally difficult to pull off well — so it may be a while before we see a good demo of it.

One small but welcome improvement that eye-tracking also enables is automatic detection of interpupillary distance, or IPD — it’s different for everyone and can be important to rendering the image correctly. One less thing to worry about.

The StarVR One is compatible with SteamVR tracking, or you can get the XT version and build your own optical tracking rig — that’s for the commercial providers for whom it’s an option.

Although this headset will be going to high-end commercial types, you can bet that the wide FOV and eye tracking in it will be standard in the next generation of consumer devices. Having tried most of the other headsets, I can say with certainty that I wouldn’t want to go back to some of them after having experienced this one. VR is still a long way off from convincing me it’s worthwhile, but major improvements like these definitely help.


Source: Tech Crunch

Cytera Cellworks aims to bring cell culture automation to your dinner plate

Cytera Cellworks hopes to revolutionize the so-called ‘clean meat’ industry through the automation of cell cultures — which could mean that one day, if all goes to plan, the company’s products are in every grocery store in America.

Cytera is a ways off from that happening, though. Founded in 2017 by two college students in the U.K., Ignacio Willats and Ali Afshar, Cytera uses robotic automation to configure cell cultures used in things like growing turkey meat in a petri dish or testing stem cells.

The two founders — Willats, the events and startups guy, and Afshar, the scientist — like to do things differently to better configure the lab, too, like strapping GoPros to lab workers’ heads. The two came together at Imperial College London to run an event on lab automation, and from there formed their friendship and their company.

“At the time, lab automation felt suboptimal,” Afshar told TechCrunch, further explaining he wanted to do something with a higher impact.

Cellular agriculture, or growing animal cells in a lab, seems to hit that button and the two are currently enrolled in Y Combinator’s Summer 2018 cohort to help them get to the next step.

There’s been an explosion in the lab-made meat industry, which relies on taking a biopsy of animal cells and then growing them in a lab to make meat, versus getting it from an actual living, breathing animal. In just the last couple of years, startups like Memphis Meats have started to pop up, offering lab meat to restaurants. Even Hampton Creek (now called Just), the company known for its vegan mayo products, is creating a lab-grown foie gras.

Originally, the company was going to go for general automation in the lab, but it had enough interest from clients and potential business in just the cell culture automation aspect that it changed the name for clarity. Cytera already has some promising prospects, too, including a leading gene therapy company the two couldn’t name just yet.

Of course, automation in the lab is nothing new, and big pharma has already poured billions into it for drug discovery. One could imagine a giant pharma company teaming up with a meat company looking to get into the lab-made meat industry and doing something similar, but so far Willats and Afshar say they haven’t really seen that happening. They say bigger companies are much more likely to partner with smaller startups like theirs to get the job done.

Obviously, there are trade-offs at either end. But should Cytera make it, you may one day find yourself eating a chicken breast built by a company that bought cells cultured in a Cytera lab.


Source: Tech Crunch

Twitter is purging accounts that were trying to evade prior suspensions

Twitter announced this afternoon that it will begin booting off its service accounts belonging to users who have tried to evade a prior suspension. The company says that the accounts in question belong to users who were previously suspended on Twitter for abusive behavior, or for trying to evade an earlier suspension. These bad actors have been able to work around Twitter’s attempts to remove them by setting up new accounts, it seems.

The company says the new wave of suspensions will hit this week and will continue in the weeks ahead, as it’s able to identify others who are “attempting to Tweet following an account suspension.” 

Twitter’s announcement on the matter – which came in the form of a tweet – was light on details. We asked the company for more information. It’s unclear, for example, how Twitter was able to identify the same persons had returned to Twitter, how many users will be affected by this new ban, or what impact this will have on Twitter’s currently stagnant user numbers.

Twitter was not able to answer our questions when asked for comment.

The company has more recently been focused on aggressively suspending accounts, as part of the effort to stem the flow of disinformation, bots, and abuse on its service. The Washington Post, for example, reported last month that Twitter had suspended as many as 70 million accounts between the months of May and June, and was continuing at the same pace in July. The removal of these accounts didn’t affect the company’s user metrics, Twitter’s CFO later clarified.

Even though those removals weren’t a factor, Twitter’s user base is shrinking. The company actually lost a million monthly active users in Q2, with 335 million overall users and 68 million in the U.S. In part, Twitter may be challenged in growing its audience because it hasn’t been able to get a handle on the rampant abuse on its platform, and because it makes poor enforcement decisions with regard to its existing policies.

For instance, Twitter is under fire right now for the way it chooses who to suspend, as it’s one of the few remaining platforms that hasn’t taken action against conspiracy theorist Alex Jones.

The Outline even hilariously (???) suggested today that we all abandon Twitter and return to Tumblr. (Disclosure: Oath owns Tumblr and TC. I don’t support The Outline’s plan. Twitter should just fix itself, even if that requires new leadership.)

In any event, today’s news isn’t about a change in how Twitter will implement its rules, but rather in how it will enforce the bans it’s already chosen to enact.

In many cases, banned users would simply create a new account using a new email address and then continue to tweet. Twitter’s means of identifying returning users has been fairly simplistic in the past: to make sure banned users didn’t come back, it used signals like the email address, phone number and IP address to identify them.
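
As a rough illustration of that simplistic approach (our sketch, not Twitter’s actual system), matching a new signup against signals recorded from banned accounts might look like this:

```python
# Illustrative sketch of naive ban-evasion matching on signup signals.
# A guess at the "fairly simplistic" approach described above, not
# Twitter's actual code or data model.
from dataclasses import dataclass
from typing import Optional


@dataclass(frozen=True)
class SignupSignals:
    email: str
    phone: Optional[str]
    ip_address: str


# Hypothetical store of signals recorded from previously banned accounts.
BANNED: list[SignupSignals] = [
    SignupSignals("spam@example.com", "+15550100", "203.0.113.7"),
]


def looks_like_evasion(new: SignupSignals) -> bool:
    """Flag a signup when any single signal matches a banned account's record."""
    for old in BANNED:
        if (new.email == old.email
                or (new.phone is not None and new.phone == old.phone)
                or new.ip_address == old.ip_address):
            return True
    return False


# A fresh email still trips the naive check if the IP matches a banned account.
print(looks_like_evasion(SignupSignals("fresh@example.com", None, "203.0.113.7")))
```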

To now go after a whole new lot of banned accounts that have been attempting to avoid their suspensions, Twitter may be using technology recently acquired from anti-abuse firm Smyte. At the time of the deal, Twitter had praised Smyte’s proactive anti-abuse systems, and said it would soon put them to work.

This system may pick up false positives, of course – and that could be why Twitter noted that some accounts could be banned in error in the weeks ahead.

Reached for comment, Twitter declined to answer our specific questions and said it could also not go into further details as that would give those attempting to evade a suspension more insight into its detection methods.

“This is a step we’re taking to further refine our work and close existing gaps we identified,” a spokesperson said. “This is specifically targeting those previously suspended for abusive behavior. Nothing to share on amount of accounts impacted since this work will remain ongoing, not just today.”

Updated, 8/14/18, 3:51 PM ET with Twitter’s comment. 


Source: Tech Crunch

Come watch the Equity podcast record live at Disrupt SF 2018

Disrupt SF is right around the corner, which means startupland is prepping to congregate once again in the city for another epic run of investors, startups and celebrities. This year, Disrupt is heading to Moscone West, so the event will be bigger and better than ever.

And I have some good news for you. Initialized Capital’s Garry Tan will join Connie Loizos and Alex Wilhelm live on the Showcase Stage at 3 pm on Thursday, September 6, to dig through the latest, greatest and worst from the world of venture capital.

That’s right, you can come to Disrupt and watch us sit on tall stools holding mics while we talk about the week’s money news in front of a bustling crowd of onlookers. Live tapings are fun because we can’t run the intro a second time if we mess it up. So come on down and hang out with us. Alex may even wear a shirt with buttons.

And it gets better. If you want to obtain a discounted ticket to Disrupt (and why wouldn’t you?), head to the ticket page and use the code “EQUITY” to get 15 percent off. Come for Equity and stay to see Aileen Lee, Reid Hoffman, Drew Houston, Anne Wojcicki, Arlan Hamilton, Ashton Kutcher, Mike Judge and so very many more people you’ve heard of on the Disrupt stage. To whet your appetite until the big show begins, click here to see the full agenda. It’s a good one. See you at Disrupt!

For more Equity, head here to catch our latest episode. Equity drops every Friday at 6:00 am PT, so subscribe to us on Apple Podcasts, Overcast, Pocket Casts, Downcast and all the casts.


Source: Tech Crunch