UK report warns DeepMind Health could gain ‘excessive monopoly power’

DeepMind’s foray into digital health services continues to raise concerns. The latest worries are voiced by a panel of external reviewers appointed by the Google-owned AI company to report on its operations after its initial data-sharing arrangements with the U.K.’s National Health Service (NHS) ran into a major public controversy in 2016.

The DeepMind Health Independent Reviewers’ 2018 report flags a series of risks and concerns, as they see it, including the potential for DeepMind Health to be able to “exert excessive monopoly power” as a result of the data access and streaming infrastructure that’s bundled with provision of the Streams app — and which, contractually, positions DeepMind as the access-controlling intermediary between the structured health data and any other third parties that might, in the future, want to offer their own digital assistance solutions to the Trust.

While the underlying FHIR (Fast Healthcare Interoperability Resources) API deployed by DeepMind for Streams is open, the contract between the company and the Royal Free Trust funnels connections via DeepMind’s own servers, and prohibits connections to other FHIR servers, a commercial structure that seemingly works against the openness and interoperability DeepMind’s co-founder Mustafa Suleyman has claimed to support.
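
FHIR’s appeal is that any compliant client can read the same standardized resource formats. As a minimal sketch (with made-up example values, not data from Streams), here is how a client might parse a FHIR Observation, the resource type that would carry the creatinine results an acute-kidney-injury alerting app depends on:

```python
import json

# A minimal FHIR-style Observation resource, as any FHIR-compliant
# server could return it over the open API (hypothetical example values).
raw = """
{
  "resourceType": "Observation",
  "status": "final",
  "code": {"coding": [{"system": "http://loinc.org",
                       "code": "2160-0",
                       "display": "Creatinine [Mass/volume] in Serum or Plasma"}]},
  "valueQuantity": {"value": 1.8, "unit": "mg/dL"}
}
"""

obs = json.loads(raw)

def summarise(observation: dict) -> str:
    """Pull the coded test name and its value out of a FHIR Observation."""
    coding = observation["code"]["coding"][0]
    qty = observation["valueQuantity"]
    return f'{coding["display"]}: {qty["value"]} {qty["unit"]}'

print(summarise(obs))
# Creatinine [Mass/volume] in Serum or Plasma: 1.8 mg/dL
```

The point of the reviewers’ complaint is that this kind of standards-level interoperability only matters in practice if the contract lets third parties actually connect to the server holding the data.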

“There are many examples in the IT arena where companies lock their customers into systems that are difficult to change or replace. Such arrangements are not in the interests of the public. And we do not want to see DeepMind Health putting itself in a position where clients, such as hospitals, find themselves forced to stay with DeepMind Health even if it is no longer financially or clinically sensible to do so; we want DeepMind Health to compete on quality and price, not by entrenching legacy position,” the reviewers write.

Though they point to DeepMind’s “stated commitment to interoperability of systems,” and “their adoption of the FHIR open API” as positive indications, writing: “This means that there is potential for many other SMEs to become involved, creating a diverse and innovative marketplace which works to the benefit of consumers, innovation and the economy.”

“We also note DeepMind Health’s intention to implement many of the features of Streams as modules which could be easily swapped, meaning that they will have to rely on being the best to stay in business,” they add. 

However, stated intentions and future potentials are clearly not the same as on-the-ground reality. And, as it stands, a technically interoperable app-delivery infrastructure is being encumbered by prohibitive clauses in a commercial contract — and by a lack of regulatory pushback against such behavior.

The reviewers also raise concerns about an ongoing lack of clarity around DeepMind Health’s business model — writing: “Given the current environment, and with no clarity about DeepMind Health’s business model, people are likely to suspect that there must be an undisclosed profit motive or a hidden agenda. We do not believe this to be the case, but would urge DeepMind Health to be transparent about their business model, and their ability to stick to that without being overridden by Alphabet. For once an idea of hidden agendas is fixed in people’s mind, it is hard to shift, no matter how much a company is motivated by the public good.”

“We have had detailed conversations about DeepMind Health’s evolving thoughts in this area, and are aware that some of these questions have not yet been finalised. However, we would urge DeepMind Health to set out publicly what they are proposing,” they add. 

DeepMind has suggested it wants to build healthcare AIs that are capable of charging by results. But Streams does not involve any AI. The service is also being provided to NHS Trusts for free, at least for the first five years — raising the question of how exactly the Google-owned company intends to recoup its investment.

Google of course monetizes a large suite of free-at-the-point-of-use consumer products — such as the Android mobile operating system; its cloud email service Gmail; and the YouTube video sharing platform, to name three — by harvesting people’s personal data and using that information to inform its ad targeting platforms.

Hence the reviewers’ recommendation for DeepMind to set out its thinking on its business model to avoid its intentions vis-a-vis people’s medical data being viewed with suspicion.

The company’s historical modus operandi also underlines the potential monopoly risks if DeepMind is allowed to carve out a dominant platform position in digital healthcare provision — given how effectively its parent has been able to turn a free-for-OEMs mobile OS (Android) into global smartphone market OS dominance, for example.

So, while DeepMind only has a handful of contracts with NHS Trusts for the Streams app and delivery infrastructure at this stage, the reviewers’ concerns over the risk of the company gaining “excessive monopoly power” do not seem overblown.

They are also worried about DeepMind’s ongoing vagueness about how exactly it works with its parent Alphabet, and what data could ever be transferred to the ad giant — an inevitably queasy combination when stacked against DeepMind’s handling of people’s medical records.

“To what extent can DeepMind Health insulate itself against Alphabet instructing them in the future to do something which it has promised not to do today? Or, if DeepMind Health’s current management were to leave DeepMind Health, how much could a new CEO alter what has been agreed today?” they write.

“We appreciate that DeepMind Health would continue to be bound by the legal and regulatory framework, but much of our attention is on the steps that DeepMind Health have taken to take a more ethical stance than the law requires; could this all be ended? We encourage DeepMind Health to look at ways of entrenching its separation from Alphabet and DeepMind more robustly, so that it can have enduring force to the commitments it makes.”

Responding to the report’s publication on its website, DeepMind writes that it’s “developing our longer-term business model and roadmap.”

“Rather than charging for the early stages of our work, our first priority has been to prove that our technologies can help improve patient care and reduce costs. We believe that our business model should flow from the positive impact we create, and will continue to explore outcomes-based elements so that costs are at least in part related to the benefits we deliver,” it continues.

So it has nothing to say to defuse the reviewers’ concerns about making its intentions for monetizing health data plain — beyond deploying a few choice PR soundbites.

On its links with Alphabet, DeepMind also has little to say, writing only that: “We will explore further ways to ensure there is clarity about the binding legal frameworks that govern all our NHS partnerships.”

“Trusts remain in full control of the data at all times,” it adds. “We are legally and contractually bound to only using patient data under the instructions of our partners. We will continue to make our legal agreements with Trusts publicly available to allow scrutiny of this important point.”

“There is nothing in our legal agreements with our partners that prevents them from working with any other data processor, should they wish to seek the services of another provider,” it also claims in response to additional questions we put to it.

“We hope that Streams can help unlock the next wave of innovation in the NHS. The infrastructure that powers Streams is built on state-of-the-art open and interoperable standards, known as FHIR. The FHIR standard is supported in the UK by NHS Digital, NHS England and the INTEROPen group. This should allow our partner trusts to work more easily with other developers, helping them bring many more new innovations to the clinical frontlines,” it adds in additional comments to us.

“Under our contractual agreements with relevant partner trusts, we have committed to building FHIR API infrastructure within the five year terms of the agreements.”

Asked about the progress it’s made on a technical audit infrastructure for verifying access to health data, which it announced last year, it reiterated the wording on its blog, saying: “We will remain vigilant about setting the highest possible standards of information governance. At the beginning of this year, we appointed a full time Information Governance Manager to oversee our use of data in all areas of our work. We are also continuing to build our Verifiable Data Audit and other tools to clearly show how we’re using data.”

So developments on that front look as slow as we expected.

The Google-owned U.K. AI company began its push into digital healthcare services in 2015, quietly signing an information-sharing arrangement with a London-based NHS Trust that gave it access to around 1.6 million people’s medical records for developing an alerts app for a condition called Acute Kidney Injury.

It also inked an MoU with the Trust where the pair set out their ambition to apply AI to NHS data sets. (They even went so far as to get ethical sign-off for an AI project — but have consistently claimed the Royal Free data was not fed to any AIs.)

However, the data-sharing collaboration ran into trouble in May 2016 when the scope of patient data being shared by the Royal Free with DeepMind was revealed (via investigative journalism, rather than by disclosures from the Trust or DeepMind).

None of the ~1.6 million people whose non-anonymized medical records had been passed to the Google-owned company had been informed or asked for their consent. And questions were raised about the legal basis for the data-sharing arrangement.

Last summer the U.K.’s privacy regulator concluded an investigation of the project — finding that the Royal Free NHS Trust had broken data protection rules during the app’s development.

Yet despite ethical questions and regulatory disquiet about the legality of the data sharing, the Streams project steamrollered on. And the Royal Free Trust went on to implement the app for use by clinicians in its hospitals, while DeepMind has also signed several additional contracts to deploy Streams to other NHS Trusts.

More recently, the law firm Linklaters completed an audit of the Royal Free Streams project, after being commissioned by the Trust as part of its settlement with the ICO. Though this audit only examined the current functioning of Streams. (There has been no historical audit of the lawfulness of people’s medical records being shared during the build and test phase of the project.)

Linklaters did recommend the Royal Free terminate its wider MoU with DeepMind — and the Trust has confirmed to us that it will be following the firm’s advice.

“The audit recommends we terminate the historic memorandum of understanding with DeepMind which was signed in January 2016. The MOU is no longer relevant to the partnership and we are in the process of terminating it,” a Royal Free spokesperson told us.

So DeepMind, probably the world’s most famous AI company, is in the curious position of being involved in providing digital healthcare services to U.K. hospitals that don’t actually involve any AI at all. (Though it does have some ongoing AI research projects with NHS Trusts too.)

In mid 2016, at the height of the Royal Free DeepMind data scandal — and in a bid to foster greater public trust — the company appointed the panel of external reviewers who have now produced their second report looking at how the division is operating.

And it’s fair to say that much has happened in the tech industry since the panel was appointed to further undermine public trust in tech platforms and algorithmic promises — including the ICO’s finding that the initial data-sharing arrangement between the Royal Free and DeepMind broke U.K. privacy laws.

The eight members of the panel for the 2018 report are: Martin Bromiley OBE; Elisabeth Buggins CBE; Eileen Burbidge MBE; Richard Horton; Dr. Julian Huppert; Professor Donal O’Donoghue; Matthew Taylor; and Professor Sir John Tooke.

In their latest report the external reviewers warn that the public’s view of tech giants has “shifted substantially” versus where it was even a year ago — asserting that “issues of privacy in a digital age are, if anything, of greater concern.”

At the same time, politicians are also casting a rather more critical eye over the workings and social impacts of tech giants.

The U.K. government has also been keen to position itself as a supporter of AI, providing public funds for the sector and, in its Industrial Strategy white paper, identifying AI and data as one of four so-called “Grand Challenges” where it believes the U.K. can “lead the world for years to come” — including specifically name-checking DeepMind as one of a handful of leading-edge homegrown AI businesses for the country to be proud of.

Still, questions over how to manage and regulate public sector data and AI deployments — especially in highly sensitive areas such as healthcare — remain to be clearly addressed by the government.

Meanwhile, the encroaching ingress of digital technologies into the healthcare space — even when the technologies involve no AI at all — is already presenting major challenges by putting pressure on existing information governance rules and structures, and raising the specter of monopolistic risk.

Asked whether it offers any guidance to NHS Trusts around digital assistance for clinicians, including specifically whether it requires multiple options be offered by different providers, the NHS’ digital services provider, NHS Digital, referred our question on to the Department of Health (DoH), saying it’s a matter of health policy.

The DoH in turn referred the question to NHS England, the executive non-departmental body which commissions contracts and sets priorities and directions for the health service in England.

And at the time of writing, we’re still waiting for a response from the steering body.

Ultimately it looks like it will be up to the health service to put in place a clear and robust structure for AI and digital decision services that fosters competition by design by baking in a requirement for Trusts to support multiple independent options when procuring apps and services.

Without that important check and balance, the risk is that platform dynamics will quickly dominate and control the emergent digital health assistance space — just as big tech has dominated consumer tech.

But publicly funded healthcare decisions and data sets should not simply be handed to the single market-dominating entity that’s willing and able to burn the most resource to own the space.

Nor should government stand by and do nothing when there’s a clear risk that a vital area of digital innovation is at risk of being closed down by a tech giant muscling in and positioning itself as a gatekeeper before others have had a chance to show what their ideas are made of, and before even a market has had the chance to form. 


Source: Tech Crunch

Crown, a new app from Tinder’s parent company, turns dating into a game

If you’re already resentful of online dating culture and how it turned finding companionship into a game, you may not be quite ready for this: Crown, a new dating app that actually turns getting matches into a game. Crown is the latest project to launch from Match Group, the operator of a number of dating sites and apps including Match, Tinder, Plenty of Fish, OkCupid, and others.

The app was thought up by Match Product Manager Patricia Parker, who understands first-hand both the challenges and the benefits of online dating – Parker met her husband online, so has direct experience in the space.

Crown won Match Group’s internal “ideathon,” and was then developed in-house by a team of millennial women, with a goal of serving women’s needs in particular.

The main problem Crown is trying to solve is the cognitive overload of using dating apps. As Match Group scientific advisor Dr. Helen Fisher explained a few years ago to Wired, dating apps can become addictive because there’s so much choice.

“The more you look and look for a partner the more likely it is that you’ll end up with nobody…It’s called cognitive overload,” she had said. “There is a natural human predisposition to keep looking—to find something better. And with so many alternatives and opportunities for better mates in the online world, it’s easy to get into an addictive mode.”

Millennials are also prone to swipe fatigue, as they spend an average of 10 hours per week in dating apps, and are being warned to cut down or face burnout.

Crown’s approach to these issues is to turn getting matches into a game of sorts.

While other dating apps present you with an endless stream of people to pick from, Crown offers a more limited selection.

Every day at noon, you’re presented with 16 curated matches, picked by some mysterious algorithm. You move through the matches by choosing who you like more between two people at a time.

That is, the screen displays two photos instead of one, and you “crown” your winner. (Get it?) This process then repeats with two people shown at a time, until you reach your “Final Four.”
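
Match hasn’t published Crown’s pairing logic, but the bracket mechanic described above can be sketched as repeated rounds of head-to-head elimination, with a `prefer` function standing in for the user’s taps (here a toy alphabetical rule, purely for illustration):

```python
from typing import Callable, List

def run_bracket(candidates: List[str],
                prefer: Callable[[str, str], str],
                final_size: int = 4) -> List[str]:
    """Repeatedly pair candidates off and keep each head-to-head
    winner until only `final_size` remain."""
    pool = list(candidates)
    while len(pool) > final_size:
        # Pair up neighbours and keep the preferred one from each pair.
        pool = [prefer(a, b) for a, b in zip(pool[::2], pool[1::2])]
    return pool

# 16 daily matches, two rounds of pairwise picks, a "Final Four".
entrants = [f"match{i:02d}" for i in range(16)]
print(run_bracket(entrants, prefer=min))
# ['match00', 'match04', 'match08', 'match12']
```

Starting from 16, the user makes 8 comparisons in the first round and 4 in the second — 12 choices in total, versus the dozens of independent yes/no swipes a feed-style app would ask for over the same pool.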

Those winners are then given the opportunity to chat with you, or they can choose to pass.

In addition to your own winners, you may also “win” the crown among other brackets, which gives you more matches to contend with.

Of course, getting dubbed a winner is a stronger signal on Crown than on an app like Tinder, where it’s more common for matches to not start conversations. This could encourage Crown users to chat, given they know there’s more of a genuine interest since they “beat out” several others. But on the flip side, getting passed on Crown is going to be a lot more of an obvious “no,” which could be discouraging.

“It’s like a ‘Bachelorette’-style process of elimination that helps users choose between quality over quantity,” explains Andy Chen, Vice President, Match Group. “Research shows that the human brain can only track a set number of relationships…and technology has not helped us increase this limit.”

Chen is referring to the Dunbar number, which says that people can only really maintain a max of some 150 social relationships. Giving users a never-ending list of possible matches on Tinder, then, isn’t helping people feel like they have options – it’s overloading the brain.

While turning matchmaking into a game feels a bit dehumanizing – maybe even more so than on Tinder, with its Hot-or-Not-inspired vibe – the team says Crown actually increases the odds, on average, of someone being selected, compared with traditional dating apps.

“When choosing one person over another, there is always a winner. The experience actually encourages a user playing the game to find reasons to say yes,” says Chen.

Crown has been live in a limited beta for a few months, but is now officially launched in L.A. (how appropriate) with more cities to come. For now, users outside L.A. will be matched with those closest to them.

There are today several thousand users on the app, and it’s organically growing, Chen says.

Plus, Crown is seeing day-over-day retention rates which are “already as strong” as Match Group’s other apps, we’re told.

Sigh. 

The app is a free download on iOS only for now. An Android version is coming, the website says.

Source: Tech Crunch

Judge says ‘literal but nonsensical’ Google translation isn’t consent for police search

Machine translation of foreign languages is undoubtedly a very useful thing, but if you’re going for anything more than directions or recommendations for lunch, its shallowness is a real barrier. And when it comes to the law and constitutional rights, a “good enough” translation doesn’t cut it, a judge has ruled.

The ruling (PDF) is not hugely consequential, but it is indicative of the evolving place in which translation apps find themselves in our lives and legal system. We are fortunate to live in a multilingual society, but for the present and foreseeable future it seems humans are still needed to bridge language gaps.

The case in question involved a Mexican man named Omar Cruz-Zamora, who was pulled over by cops in Kansas. When they searched his car, with his consent, they found quite a stash of meth and cocaine, which naturally led to his arrest.

But there’s a catch: Cruz-Zamora doesn’t speak English well, so the consent to search the car was obtained via an exchange facilitated by Google Translate — an exchange that the court found was insufficiently accurate to constitute consent given “freely and intelligently.”

The Fourth Amendment prohibits unreasonable search and seizure, and lacking a warrant or probable cause, the officers required Cruz-Zamora to understand that he could refuse to let them search the car. That understanding is not evident from the exchange, during which both sides repeatedly fail to comprehend what the other is saying.

Not only that, but the actual translations provided by the app weren’t good enough to accurately communicate the question. For example, the officer asked “¿Puedo buscar el auto?” — the literal meaning of which is closer to “can I find the car,” not “can I search the car.” There’s no evidence that Cruz-Zamora made the connection between this “literal but nonsensical” translation and the real question of whether he consented to a search, let alone whether he understood that he had a choice at all.

With consent invalidated, the search of the car is rendered unconstitutional, and the evidence it produced against Cruz-Zamora is suppressed.

It doesn’t mean that consent is impossible via Google Translate or any other app — for example, if Cruz-Zamora had himself opened his trunk or doors to allow the search, that likely would have constituted consent. But it’s clear that app-based interactions are not a sure thing. This will be a case to consider not just for cops on the beat looking to help or investigate people who don’t speak English, but in courts as well.

Providers of machine translation services would have us all believe that those translations are accurate enough to use in most cases, and that in a few years they will replace human translators in all but the most demanding situations. This case suggests that machine translation can fail even the most basic tests, and as long as that possibility remains, we have to maintain a healthy skepticism.


Source: Tech Crunch

Machines learn language better by using a deep understanding of words

Computer systems are getting quite good at understanding what people say, but they also have some major weak spots. Among them is the fact that they have trouble with words that have multiple or complex meanings. A new system called ELMo adds this critical context to words, producing better understanding across the board.

To illustrate the problem, think of the word “queen.” When you and I are talking and I say that word, you know from context whether I’m talking about Queen Elizabeth, or the chess piece, or the matriarch of a hive, or RuPaul’s Drag Race.

This ability of words to have multiple meanings is called polysemy. And really, it’s the rule rather than the exception. Which meaning it is can usually be reliably determined by the phrasing — “God save the queen!” versus “I saved my queen!” — and of course all this informs the topic, the structure of the sentence, whether you’re expected to respond, and so on.

Machine learning systems, however, don’t really have that level of flexibility. The way they tend to represent words is much simpler: the system looks at all the different definitions of a word and comes up with a sort of average — a complex representation, to be sure, but not reflective of the word’s true complexity. When it’s critical that the correct meaning of a word gets through, they can’t be relied on.
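
To see why averaging is a problem, consider a toy example with hypothetical two-dimensional “sense” vectors: the single static vector for “queen” ends up equally (and poorly) aligned with both of its meanings.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Hypothetical 2-d "sense" vectors for "queen": one royal, one chess.
royal_sense = [0.9, 0.1]
chess_sense = [0.1, 0.9]

# A traditional static embedding collapses both senses into one average...
static_queen = [(a + b) / 2 for a, b in zip(royal_sense, chess_sense)]

# ...which sits equally far from each meaning, representing neither well.
print(round(cosine(static_queen, royal_sense), 3))  # 0.781
print(round(cosine(static_queen, chess_sense), 3))  # 0.781
```

Real word embeddings have hundreds of dimensions rather than two, but the failure mode is the same: one vector per word cannot commit to a sense.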

ELMo (“Embeddings from Language Models”), however, lets the system handle polysemy with ease; as evidence of its utility, it was awarded best paper honors at NAACL last week. At its heart it uses its training data (a huge collection of text) to determine whether a word has multiple meanings and how those different meanings are signaled in language.

For instance, you could probably tell in my example “queen” sentences above, despite their being very similar, that one was about royalty and the other about a game. That’s because the way they are written contains clues for your own context-detection engine to tell you which queen is which.

Informing a system of these differences can be done by manually annotating the text corpus from which it learns — but who wants to go through millions of words making a note on which queen is which?

“We were looking for a method that would significantly reduce the need for human annotation,” explained Matthew Peters, lead author of the paper. “The goal was to learn as much as we can from unlabeled data.”

In addition, he said, traditional language learning systems “compress all that meaning for a single word into a single vector. So we started by questioning the basic assumption: let’s not learn a single vector, let’s have an infinite number of vectors. Because the meaning is highly dependent on the context.”

ELMo learns this information by ingesting the full sentence in which the word appears; it would learn that when a king is mentioned alongside a queen, it’s likely royalty or a game, but never a beehive. When it sees pawn, it knows that it’s chess; jack implies cards; and so on.
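
ELMo learns these signals automatically from unlabeled text; the sketch below hand-codes the same intuition with hypothetical cue-word sets, just to show how the rest of the sentence picks out a sense:

```python
# Hand-coded sketch of sense disambiguation from sentence context.
# ELMo learns cues like these from data; the sets below are
# hypothetical and purely illustrative.
SENSE_CUES = {
    "royalty": {"king", "crown", "throne", "god", "save"},
    "chess":   {"pawn", "bishop", "knight", "rook", "checkmate"},
}

def guess_sense(sentence: str) -> str:
    """Pick the sense whose cue words overlap most with the sentence."""
    words = set(sentence.lower().replace("!", " ").replace(".", " ").split())
    scores = {sense: len(words & cues) for sense, cues in SENSE_CUES.items()}
    return max(scores, key=scores.get)

print(guess_sense("God save the queen!"))                 # royalty
print(guess_sense("My queen took his pawn and bishop."))  # chess
```

Where this sketch relies on a fixed cue list, ELMo instead produces a different vector for the same word in each sentence, so downstream systems get the disambiguation for free.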

An ELMo-equipped language engine won’t be nearly as good as a human with years of experience parsing language, but even working knowledge of polysemy is hugely helpful in understanding a language.

Not only that, but taking the whole sentence into account in the meaning of a word also allows the structure of that sentence to be mapped more easily, automatically labeling clauses and parts of speech.

Systems using the ELMo method had immediate benefits, improving on even the latest natural language algorithms by as much as 25 percent — a huge gain for this field. And because it is a better, more context-aware style of learning, but not a fundamentally different one, it can be integrated easily even into existing commercial systems.

In fact, Microsoft is reportedly already using it with Bing. After all, it’s crucial in search to determine intention, which of course requires an accurate reading of the query. ELMo is open source, too, like all the work from the Allen Institute for AI, so any company with natural language processing needs should probably check this out.

The paper lays down the groundwork of using ELMo for English language systems, but because its power is derived by essentially a close reading of the data that it’s fed, there’s no theoretical reason why it shouldn’t be applicable not just for other languages, but in other domains. In other words, if you feed it a bunch of neuroscience texts, it should be able to tell the difference between temporal as it relates to time and as it relates to that region of the brain.

This is just one example of how machine learning and language are rapidly developing around each other; although it’s already quite good enough for basic translation, speech to text and so on, there’s quite a lot more that computers could do via natural language interfaces — if only they knew how.


Source: Tech Crunch

Apple and Oprah sign a multi-year partnership on original content

Apple announced today a multi-year content partnership with Oprah Winfrey to produce programs for the tech company’s upcoming video streaming service. Apple didn’t provide any specific details as to what sort of projects Winfrey would be involved in, but there will be more than one it seems.

Apple shared the news of its deal with Winfrey in a brief statement on its website, which read:

Apple today announced a unique, multi-year content partnership with Oprah Winfrey, the esteemed producer, actress, talk show host, philanthropist and CEO of OWN.

Together, Winfrey and Apple will create original programs that embrace her incomparable ability to connect with audiences around the world.

Winfrey’s projects will be released as part of a lineup of original content from Apple.

The deal is a significant high-profile win for Apple, which has been busy filling out its lineup with an array of talent in recent months.

The streaming service will also include a reboot of Steven Spielberg’s Amazing Stories; a Reese Witherspoon- and Jennifer Aniston-starring series set in the world of morning TV; an adaptation of Isaac Asimov’s Foundation books; a thriller starring Octavia Spencer; a Kristen Wiig-led comedy; a Kevin Durant-inspired scripted basketball show; a series from “La La Land’s” director; and several other shows.

Winfrey, however, is not just another showrunner or producer. She’s a media giant who has worked across film, network and cable TV, print, and more as an actress, talk show host, creator, and producer.

She’s also a notable philanthropist, having contributed more than $100 million to provide education to academically gifted girls from disadvantaged backgrounds, and is continually discussed as a potential presidential candidate, though she said that’s not for her.

On television, Winfrey’s Harpo Productions developed daytime TV shows like “Dr. Phil,” “The Dr. Oz Show” and “Rachael Ray.” Harpo Films produced several Academy Award-winning movies including “Selma,” which featured Winfrey in a starring role. She’s also acted in a variety of productions over the years, like “The Color Purple,” which scored her an Oscar nom, “Lee Daniels’ The Butler,” “The Immortal Life of Henrietta Lacks” and Disney’s “A Wrinkle in Time.”

Winfrey also founded the cable network OWN in 2011 in partnership with Discovery Communications, and has exec produced series including “Queen Sugar,” “Oprah’s Master Class,” and the Emmy-winning “Super Soul Sunday.”

The latter has a connection with Apple, as it debuted as a podcast called “Oprah’s SuperSoul Conversations” and became a #1 program on Apple Podcasts.

Winfrey recently extended her contract with OWN through 2025, so it’s unclear how much time she’ll devote specifically to her Apple projects.

Apple also didn’t say if Winfrey will star or guest in any of the programs herself, but that’s always an option on the table with a deal like this. CNN, however, is reporting that Winfrey “is expected to have an on-screen role as a host and interviewer.”

Source: Tech Crunch

Kustomer gets $26M to take on Zendesk with an omnichannel approach to customer support

The CRM industry is now estimated to be worth some $4 billion annually, and today a startup has announced a round of funding that it hopes will help it take on one aspect of that lucrative pie, customer support. Kustomer, a startup out of New York that integrates a number of sources to give support staff a complete picture of a customer when he or she contacts the company, has raised $26 million.

The funding, a series B, was led by Redpoint Ventures (notably, an early investor in Zendesk, which Kustomer cites as a key competitor), with existing investors Canaan Partners, Boldstart Ventures, and Social Leverage also participating.

Cisco Investments was also a part of this round as a strategic investor: Cisco (along with Avaya) is one of the world’s biggest PBX equipment vendors, and customer support is one of the biggest users of this equipment, but the segment is also under pressure as more companies move these services to the cloud (and consider alternative options). Potentially, you could see how Cisco might want to partner with Kustomer to provide more services on top of its existing equipment, and potentially as a standalone service — although for now the two have yet to announce any actual partnerships.

Given that Kustomer has been approached already for potential acquisitions, you could see how the Ciscos of the world might be one possible category of buyers.

Kustomer is not discussing valuation, but it has raised a total of $38.5 million. Kustomer’s customers include brands in fashion, e-commerce and other sectors that provide customer support on products on a regular basis, such as Ring, Modsy, Glossier, SmugMug and more.

When we last wrote about Kustomer, when it raised $12.5 million in 2016, the company’s mission was to effectively turn anyone at a company into a customer service rep — the idea being that some issues are better answered by specific people, and a CRM platform for all employees to engage could help them fill that need.

Today, Brad Birnbaum, the co-founder and CEO, says that this concept has evolved. He said that “half of its business model still involves the idea of everyone being on the platform.” For example, an internal sales rep can collaborate with someone in a company’s shipping department — “but the only person who can communicate with the customer is the full-fledged agent,” he said. “That is what the customers wanted so that they could better control the messaging.”

The collaboration, meanwhile, has taken an interesting turn: it’s not just about employees communicating better to develop a more complete picture of a customer and his/her history with the company; it’s also about a company’s systems integrating better to give a more complete view to the reps. Integrations include data from e-commerce platforms like Shopify and Magento; voice and messaging platforms like Twilio, TalkDesk, Twitter and Facebook Messenger; feedback tools like Nicereply; analytics services like Looker, Snowflake, Jira and Redshift; and Slack.

Birnbaum previously founded and sold Assistly to Salesforce, which turned it into Desk.com (his co-founder in Kustomer, Jeremy Suriel, was Assistly’s chief architect), and between that and Kustomer he also had a go at building out Airtime, Sean Parker’s social startup. Kustomer, he says, is not only competing against Salesforce but perhaps even more specifically Zendesk, in offering a new take on customer support.

Zendesk, he said, had really figured out how to make customer support ticketing work efficiently, “but they don’t understand the customer at all.”

“We are a much more modern solution in how we see the world,” he continued. “No one does omni-channel customer service properly, where you can see a single threaded conversation spanning all of a customer’s touchpoints.”

Going forward, Kustomer will be using the funding to expand its platform with more capabilities, and some of its own automations and insights (rather than those provided by way of integrations). This will also see the company expand into other kinds of services adjacent to taking inbound customer requests, such as reaching out to customers proactively, potentially to sell to them. “We plan to go broadly with engagement as an example,” Birnbaum said. “We already know everything about you so if we see you on a website, we can proactively reach out to you and engage you.”

“It is time for disruption in customer support industry, and Kustomer is leading the way,” said Tomasz Tunguz, partner at Redpoint Ventures, in a statement. “Kustomer has had impressive traction to date, and we are confident the world’s best B2C and B2B companies will be able to utilize the platform in order to develop meaningful relationships, experiences, and lifetime value for their customers. This is an exciting and forward-thinking platform for companies as well as their customers.”


Source: Tech Crunch

With its new in-car operating system, BMW slowly breaks with tradition

When you spend time with a lot of BMW folks, as I did during a trip to Germany earlier this month, you’ll regularly hear the word “heritage.” Maybe that’s no surprise, given that the company is now well over 100 years old. But in a time of rapid transformation that’s hitting every car manufacturer, engineers and designers have to strike a balance between honoring that history and looking forward. With the latest version of its BMW OS in-car operating system and its accompanying design language, BMW is breaking with some traditions to allow it to look into the future while also sticking to its core principles.

If you’ve driven a recent luxury car, then the instrument cluster in front of you was likely one large screen. But even in the most recent BMWs, you’ll still see the standard round gauges that have adorned cars since their invention. That’s what drivers expect and that’s what the company gave them, down to the point where it essentially glued a few plastic strips on the large screen that now makes up the dashboard to give drivers an even more traditional view of their Autobahn speeds.

With BMW OS 7.0, which I got some hands-on time with in the latest BMW 8-series model that’s making its official debut today (and where the OS update will also make its first appearance), the company stops pretending that the screen is a standard set of gauges. Sure, some of the colors remain the same, but users looking for the classic look of a BMW cockpit are in for a surprise.

“We first broke up the classic round instruments back in 2015 so we could add more digital content to the middle, including advanced driving assistance systems,” one of BMW’s designers told me. “And that was the first break [with tradition]. Now in 2018, we looked at the interior and exterior design of our cars — and took all of those forms — and integrated them into the digital user interface of our cars.”

The overall idea behind the design is to highlight relevant information when it’s needed but to let it fade back when it’s not, allowing the driver to focus on the task at hand (which, at least for the next few years, is mostly driving).

So when you enter the car, you’ll get the standard BMW welcome screen, which is now integrated with your digital BMW Connected profile in the cloud. When you start driving, the new design comes to life, with all of the critical information you need for driving on the left side of the dashboard, as well as data about the state of your driving assistance systems. That’s a set of digital gauges that remains on the screen at all times. On the right side of the screen, though, you’ll see all of the widgets that can be personalized. There are six of those, and they range from G meters for when you’re at a track day to a music player that uses the space to show album art.

The middle of the screen focuses on navigation. But as the BMW team told me, the idea here isn’t to just copy the map that’s traditionally on the tablet-like screen in the middle of the dashboard. What you’ll see here is a stripped-down map view that only shows you the navigational data you need at any given time.

And because the digital user interface isn’t meant to be a copy of its analog counterpart from yesteryear, the team also decided that it could play with more colors. That means that as you move from sport to eco mode, for example, the UI’s primary color changes from red to blue.

The instrument cluster is only part of the company’s redesign. It also took a look at what it calls the “Control Display” in the center console. That’s traditionally where the company has displayed everything from your music player to its built-in GPS maps (and Apple CarPlay, if that’s your thing). Here, BMW has simplified the menu structure by making it much flatter and also made some tweaks to the overall design. What you’ll see is that it also went for a design language here that’s still occasionally playful but that does away with many of the 3D effects, and instead opted for something that’s more akin to Google’s Material Design or Microsoft’s Fluent Design System. This is a subtle change, but the team told me that it very deliberately tried to go with a more modern and flatter look.

This display now also offers more tools for personalization, with the ability to change the layout to show more widgets, if the driver doesn’t mind a more cluttered display, for example.

Thanks to its integration with BMW Connect, the company’s cloud-based tools and services for saving and syncing data, managing in-car apps and more, the updated operating system also lays the foundation for the company’s upcoming e-commerce play. Dieter May, BMW’s VP for digital products and services, has talked about this quite a bit in the past, and the updated software and fully digital cockpit is what will enable the company’s next moves in this direction. Because the new operating system puts a new emphasis on the user’s digital account, which is encoded in your key fob, the car becomes part of the overall BMW ecosystem, which includes other mobility services like ReachNow, for example (though you obviously don’t need to have a BMW Connect account just to drive the car).

Unsurprisingly, the new operating system will launch with a couple of the company’s more high-end vehicles like the 8-series car that is launching today, but it will slowly trickle down to other models, as well.


Source: Tech Crunch

3,000 journalists covering Kim-Trump this week is WTF is wrong with media

Media businesses are in the dumper. Every week, we hear of new layoffs, budget cuts, diminished editorial quality, and more, way more. And yet, somehow, miraculously, more than 3,000 journalists managed to find the funds to travel to Singapore to cover the Kim-Trump Summit Extraordinaire this week.

How many journalists got to see the summit activity? From Politico: “Most notably, the number of American journalists allowed to witness the meeting between Trump and Kim was limited to seven — a smaller group than would usually be present for such a summit, and one that excluded representatives from the major wire services” (emphasis added).

It’s a huge news story, a major historical moment in the relations between the DPRK and the United States, and one that portends massive changes in that relationship going forward. The event should be fervently covered by the global press. Yet 3,000 seems a stupendous number of people to cover an event so scripted and managed. Journalists watched from a warehouse and got so bored that they started interviewing each other rather than, I don’t know, a source.

I notice this same dynamic watching the keynote videos of any of the top tech companies — there are hundreds if not thousands of journalists covering these events from the audience. Exactly how you build a unique story sitting there beats me.

In media, one of the most critical qualities of a great story is salience — how important a story is to a particular audience. Tech readers want to know everything happening at an Apple keynote, just as much as the whole world is curious about what shakes down in Singapore. It makes sense to have a density of journalists to cover these events.

The problem in my mind is the sheer duplication of work, when the increasingly precious time of journalists could be spent on finding more differentiated or unique stories that are under-reported. In Singapore, how many English-language journalists needed to be there? How many Chinese-speaking or Korean-speaking journalists? I’m not suggesting the answer in aggregate is one each, but certainly the number should be fractions of 3,000.

Journalists taking pictures of a TV screen of Kim and Trump. How is this journalism?

I have given a lot of thought to subscription models in media the past few weeks, arguing that consumers are increasingly facing a “subscription hell” and fighting against the notion that paying for content should only be the preserve of the top 1%.

Yet, if we want readers to pay for our content, it has to be a differentiated product. This makes complete sense to every participant in industries like music, or movies, or books. Musicians may cover other artists, but they almost invariably try to perform original music on their own. Ultimately, without your own sound, you have no voice and no fanbase.

Nonetheless, I feel journalists and particularly editors have to be reminded of this on a regular basis. Journalists still cling to the generalist model of our forebears, rather than becoming specialists on a beat where they can offer deeper insights and original reporting. Everyone can’t cover everything.

That’s one reason why people like Ben Thompson at Stratechery and Bill Bishop at Sinocism have grown to be so popular — they do one thing well, and don’t try to offer a bundle of content in the same old way. Instead, they have staked their brands and reputations on their deep focus. Readers can then add and subtract these subscriptions as their interests shift.

The biggest block to reducing this duplication is the lack of cooperation among media companies. Syndication of content happens occasionally, such as a recent deal between Politico and the South China Morning Post to provide more China-focused coverage to the U.S.-dominated readership of Politico. Those deals, though, tend to take months to hash out, and are often too slow-moving to match the news cycle.

Imagine instead a world where specialists are covering focused beats. Kim-Trump could have been covered by people who specialize in Singaporean foreign affairs (as hosts, they had the most knowledge of what was going on), as well as North Korea watchers and U.S.-Asia foreign policy junkies. Clearinghouses for syndication (blockchain or no blockchain) could have ensured that the content from these specialists was distributed to all who had an interest in adding coverage. No generalists need apply.

This isn’t an efficiency argument for further newsroom cutbacks, but rather an argument to use the talent and time of existing journalists to trailblaze unique paths and coverage. Until the media learns that not everyone can become a North Korea or Google expert overnight, we are going to continue to see warehouses and ballrooms filled to the brim with preening writers and camera teams, while the stories that most need telling remain overlooked.


Source: Tech Crunch

Reflections on E3 2018

After taking a year off, I returned to E3 this week. It’s always a fun show, in spite of the fact that the show floor has come to rival Comic-Con in terms of the mass of people the show’s organizers are able to cram into the aisles of the convention center floor.

We’ve been filing stories all week, but here is a very much incomplete collection of my thoughts on this year’s show.

Zombies are still very much a thing

I’d have thought we’d have hit peak zombie years ago, but here we are, zombies everywhere. That includes the LA Convention Center lobby, which was swarming with actors decked out as the undead. There’s something fundamentally disturbing about watching gamers get pictures taken with fake, bloody corpses. Or maybe it’s just the perfect allegory for our time.

Nintendo’s back

A slight adjustment in approach certainly played a role, as the company has embraced mobile gaming. But the key to Nintendo’s return was a refocus on what it does best: offering an innovative experience with familiar IP. Oh, and the GameCube controller Smash Bros. compatibility was a brilliant bit of fan service, even by Nintendo’s standards.

Quantity versus quality?

Microsoft’s event was a sort of video game blitzkrieg. The company showed off 50 titles, a list that included 15 exclusives. Sony, on the other hand, stuck to a handful, but presented them in much greater depth. Ultimately, I have to say I preferred the latter. Real gameplay footage feels like an extremely finite resource at these events.

Ultra violence in ultra high-def

Certainly not a new trend in gaming, but there’s something about watching someone bite off someone else’s face on the big screen that’s extra upsetting. Sony’s press conference was a strange sort of poetry, with some of the week’s most stunning imagery knee-deep in blood and gore.

Reedus ’n fetus

We saw more footage and somehow we understand the game less?

Checkmate

Indiecade is always a favorite destination at E3. It’s a nice respite from the big three’s packed booths. Interestingly, there were a lot more desktop games than I remember. You know, the real kind with physical pieces and no screens.

Death of a Tomb Raider

I played Shadow of the Tomb Raider on a PC in NVIDIA’s meeting space. It’s good, but I’m not good at it. I killed poor Lara A LOT. I can deal with that sort of thing when my character is in full Master Chief regalia or whatever, but those close-up shots of her face when I drowned her for the fifth time kind of bummed me out. Can video games help foster empathy or are we all just destined to desensitize ourselves because we have tombs to raid, damn it?

I saw the light

NVIDIA also promised me that its ray-tracing tech would be the most impressive demo I saw at E3 that day. I think they were probably right, so take that, Sonic Racing. The tech, which was first demoed at GDC, “brings real-time, cinematic-quality rendering to content creators and game developers.”

VR’s still waiting in the wings

At E3 two years ago, gaming felt like an industry on the cusp of a VR breakthrough. In 2018, however, it doesn’t feel any closer. There were a handful of compelling new VR experiences at the event, but it felt like many of the peripheral and other experiences were sitting on the fringes of the event — both literally and metaphorically — waiting for a crack at the big show.

Remote Control

Sony’s Control trailer was the highest ratio of excitement to actual information I experienced. Maybe it’s Inception the video game or the second coming of Quantum Break. I dunno, looks fun.

AR’s a thing, but not, like, an E3 thing

We saw a few interesting examples of this, including the weirdly wonderful TendAR, which requires you to make a bunch of faces so a fake fish doesn’t die. It’s kind of like a version of Seaman that feeds on your own psychic energy. At the end of the day, though, E3 isn’t a mobile show.

Cross-platform

Having said that, there are some interesting examples of cross-platform potential popping up here and there. The $50 Poké Ball Plus for the Switch is a good example I’m surprised hasn’t been talked about more. Along with controlling the new Switch titles, it can be used to capture Pokémon via Pokémon GO. There’s some good brand synergy right there. And then, of course, there’s Fortnite, which is also on the Switch. The game’s battle royale mode is a great example of how cross-platform play can lead to massive success. Though by all accounts, Sony doesn’t really want to play ball.

V-Bucks

Oh, Epic Games has more money than God now.

Moebius strip

Video games are art. You knew that already, blah, blah, blah. But Sable looks like a freaking Moebius comic come to life. I worry that it will be about as playable as Dragon’s Lair, but even that trailer is a remarkable thing.


Source: Tech Crunch

Samsung announces a push for renewable energy

Samsung has announced that it will use 100 percent renewable energy for all its factories and offices in the U.S., Europe and China. This is the first time Samsung has made a public commitment to renewable energy.

Greenpeace and environmental activists have been calling out Samsung for months as many tech companies have already started switching to renewable energy.

Samsung is starting with the parts of its organization that it can control more easily — its own buildings, factories and offices. According to Greenpeace’s press release, 17 of its 38 buildings are based in the U.S., Europe and China.

“Samsung Electronics is the first electronics manufacturing company in Asia to set a renewable energy target. This commitment could have an enormous impact in reducing the company’s massive global manufacturing footprint, and shows how critical industry participation is in reducing emissions and accelerating the transition to renewable energy. More companies should follow suit and set renewable energy targets, and governments should promote policies that enable companies to procure renewable energy easily,” Greenpeace campaigner Insung Lee said in the press release.

It won’t happen overnight. But these buildings will run on renewable energy by 2020. Samsung says that it could increase its use of renewable energy in other countries. In addition to that, Samsung is going to install solar panels in Gyeonggi province in South Korea.

Like many tech companies, Samsung also works with thousands of suppliers. So it’s not enough to use renewable energy for your own facilities. Samsung is starting small on this front and partnering with the Carbon Disclosure Project Supply Chain Program.

First, the company wants to identify the energy needs of its top 100 suppliers and help them move to renewable energy. This is a multi-year project, and it’s going to be important to regularly track Samsung’s progress on this front.

But it’s also good to see one of the biggest consumer electronics companies in the world making strong commitments.


Source: Tech Crunch