AI and Poker, here we are…

Facebook AI researcher Noam Brown built an AI that can beat professional poker players in a multiplayer game.

This is fascinating.

In March 2016, Google DeepMind's AlphaGo defeated the world champion, Lee Sedol, in Korea in the game of Go. That was an important accomplishment because Go has orders of magnitude more degrees of freedom than Chess. This means brute-force computing wouldn't have been able to solve the problem easily, i.e. we would have needed to throw far more computing at it than was actually used.

Playing poker is yet another significant milestone. Poker is a game of incomplete information: a player doesn't know what the other players hold. This is unlike Go or Chess, where each player can see the full layout of the board. So from an uncertainty point of view, poker is complex, and a multiplayer game is more complex still.

AI being able to play a game of incomplete information is important because that is what real life is – full of uncertainty and hidden information. Investing is another one of those games.
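How does an AI cope with hidden information? One core idea in modern poker bots is to minimise *regret* over many simulated games rather than search a fully visible game tree. As an illustrative sketch (not Brown's actual system), here is regret matching, the building block of the counterfactual regret minimisation family, applied to rock-paper-scissors; the players' average strategies converge to the balanced, unexploitable mix:

```python
import random

ACTIONS = 3  # rock, paper, scissors
# PAYOFF[a][b]: reward to the player choosing a against an opponent choosing b
PAYOFF = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]

def strategy_from_regrets(regrets):
    """Play each action in proportion to its positive accumulated regret."""
    positives = [max(r, 0.0) for r in regrets]
    total = sum(positives)
    if total > 0:
        return [v / total for v in positives]
    return [1.0 / ACTIONS] * ACTIONS  # no regret yet: play uniformly

def train(iterations, seed=0):
    rng = random.Random(seed)
    regrets = [[0.0] * ACTIONS, [0.0] * ACTIONS]
    strategy_sums = [[0.0] * ACTIONS, [0.0] * ACTIONS]
    for _ in range(iterations):
        strats = [strategy_from_regrets(regrets[p]) for p in (0, 1)]
        moves = [rng.choices(range(ACTIONS), weights=strats[p])[0] for p in (0, 1)]
        for p in (0, 1):
            my, opp = moves[p], moves[1 - p]
            for a in range(ACTIONS):
                # regret = how much better action a would have done than the move played
                regrets[p][a] += PAYOFF[a][opp] - PAYOFF[my][opp]
                strategy_sums[p][a] += strats[p][a]
    # the *average* strategy over all iterations approaches equilibrium
    return [[s / iterations for s in strategy_sums[p]] for p in (0, 1)]

avg = train(50_000)
print(avg[0])  # each probability approaches 1/3
```

Rock-paper-scissors is trivially small, but the same regret-driven self-play idea scales (with heavy engineering) to games like poker where no player sees the whole state.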

Why does Facebook want regulation?

The backlash against the tech giants has hardly been out of the headlines in the past two years. Google has been fined twice by the European regulator, Facebook has appeared in front of the US Senate to explain its involvement with Cambridge Analytica and the European Parliament has passed a copyright directive, making internet platforms liable for content their users upload.

A tumultuous two years was followed last month by Mark Zuckerberg calling for regulation in an article for the Washington Post. But why does Facebook want regulation?

The debate

A year ago, I argued that most of the issues raised by various stakeholders can be summarised into three categories: a) anti-competitive practices, b) ownership of content, and c) compromised user privacy.

While the issue of anti-competitive acquisitions is straightforward to fix with greater scrutiny, the last two issues are intertwined and in conflict with each other.

These issues arise not only because these digital platforms are maximising their economic returns, but also because of laws written back in the mid-1990s to sustain growth of the Internet. In the US, the law suggested “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Because that law applied only if the information service provider was not the publisher nor played any editorial role, the platforms responded to the incentives and remained neutral to user generated content. Similar laws were passed elsewhere. This allowed the advertising based business model for the internet to scale.

Given how the internet has developed since these laws were established, these rules are out of date and recent events have demonstrated that they can be abused by ‘bad’ actors – state sponsored, extremists or individuals.

So how do we set new rules? There are two issues that need to be addressed:

The issue of scale

Let’s tackle this one first. Facebook and YouTube both reach roughly two billion users, and every second a huge amount of content is added to the vast libraries they host. In the case of YouTube, 400 hours of content is uploaded every minute. The challenge for the tech giants is how to moderate such a huge volume of content – and whether doing so is humanly possible.

If the moderation is automated, what is the balance to ensure that the tech giants are not censoring free speech or debates on topics that are deemed offensive?

While automation, alongside human judgement, will be key to the solution, it is unlikely to be error-free. A durable solution will therefore not be available imminently, as it will require several iterations to get the balance right.

Conflicting requests

Content regulation and user privacy are difficult to reconcile at the same time. While governments increasingly want the digital platforms to be responsible for the content they host, the users want more privacy and control over their data (i.e. content on these platforms). How do you satisfy them both?

Governments have wanted a degree of control over the content being shared on social platforms. To achieve this, their answer is to make social platforms responsible for the content they host. A failure to comply could lead to fines for the company and/or imprisonment of executives. At the same time, we have heard users and privacy groups voice their concerns about control that platforms have over user data. They want to liberate it. Regulators, on the other hand, want more competition and to break the monopoly of these platforms over user data. However, if and when the data is liberated, it could be exposed to misuse, as was the case with the Cambridge Analytica scandal, or it could perhaps increase the chances of it getting hacked and ending up in the hands of ‘bad’ actors. If the data is encrypted to ensure it ends up only in the right hands, control over the data is potentially lost, which is not what governments want.

So, where do we go from here?

Given the breadth and reach of these platforms today, the tech giants now have a responsibility to distinguish between good and bad user generated content. However, that could lead to potential errors of judgement and the censoring of free speech. 

For a wide-reaching and mature internet, we need new rules. Governments need to think long and hard before writing them, as they will shape the internet of the future. Regulatory burden in most industries entrenches the market position of incumbents and makes it onerous for new entrants to compete. Governments need to balance the ‘red tape’ against the level of competition in the industry and the consumer surplus they want to preserve. So why does Zuckerberg want regulation? He knows the internet needs new rules, and he knows the trade-offs that will be involved in satisfying everyone. Anticipating the issues that will become apparent further down the road, he is asking for a unified set of requirements from different stakeholders – demands that only get more complex once multiple governments are involved. Standardised regulation is probably in everyone’s interest – governments, consumers, new entrants, and Facebook.

Not Investment Advice.

What drives Pinterest's growth over the next 3-5 years?

The company recently filed the S-1 in preparation for its IPO, and disclosed the makeup of its revenues.

While most of the MAU growth came from the International segment, most of the revenue growth came from monetising the US customer base, whose run rate at the end of 2018 was ~$3/user/quarter. This compares with Facebook’s $35/user/quarter in 4Q18.

The question to answer for Pinterest is how far it can grow the monetisation of its American users.
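Using only the figures above, here is a back-of-the-envelope sketch of the monetisation gap. The "close a quarter of the gap over five years" scenario is purely an illustrative assumption, not a forecast:

```python
# Figures from the post: quarterly revenue per US user
pinterest_arpu = 3.0   # ~$/user/quarter, end of 2018
facebook_arpu = 35.0   # $/user/quarter, 4Q18

gap = facebook_arpu / pinterest_arpu
print(f"Facebook monetises a US user ~{gap:.1f}x better than Pinterest")

# Hypothetical scenario: Pinterest closes a quarter of that gap in five years.
# The implied annual growth rate in US ARPU would be:
target = pinterest_arpu + 0.25 * (facebook_arpu - pinterest_arpu)
years = 5
cagr = (target / pinterest_arpu) ** (1 / years) - 1
print(f"Implied US ARPU CAGR under that scenario: {cagr:.1%}")
```

Even that partial catch-up implies roughly 30% annual ARPU growth, which frames how much monetisation headroom (and execution risk) is embedded in the comparison.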

Is regulation coming to a Technology near you?

Are today’s technology titans exploiting their market position without any attention to broader stakeholder welfare? Essentially, this is the implicit question being discussed in most media today.
Given the recent news flow, it certainly feels like technology has had a rough start to the year. In the past few weeks, the tech giants have seen a marked underperformance on negative news flow: Zuckerberg in Washington, Uber’s self-driving car crash, a Tesla recall and President Trump’s tweets against Amazon. Although Google has managed to avoid this recent limelight, it was (not so long ago) hit with a EUR 2.4bn fine for allegedly breaching anti-trust rules.
In my view, these allegations fall in three categories: 1) anti-competitive practices; 2) ownership of content; 3) compromised user privacy. I will tackle the latter two in this post.
Teething problems or poor incentives?
Before looking at the specific situation today, it’s worth taking a step back. This is not something that is happening for the first time.
Technology, very broadly defined, is a tool that enhances productivity. Its development is often undertaken with that sole criterion in mind. This isolation from “real life” helps innovation and progress. But when used broadly and introduced into the real world, this highly efficient tool faces two major issues: a) the tragedy of the commons / prisoner’s dilemma[i] and b) availability to both good and bad actors.
This is where we, as a society, agree to ‘norms of usage’. Nuclear technology is great for producing cleaner energy, but it can equally be used for nuclear weapons. Internal combustion engines are great for improving our mobility. As individuals, we have every incentive to make the most of such technology to improve our mobility/productivity, but society, over time, agrees to norms of usage (such as CO2 emission limits).
The internet is no different. It has given us a lot of value (in terms of content and tools) for free, in exchange for advertisements. At the same time, it has been abused by several bad actors – fake content and compromised user privacy are two issues arising from that abuse. Does that mean that the internet, and today’s technology titans riding its coat-tails, are all bad? Or does it mean it may be high time to agree to ‘norms of usage’ for this technology? We can also call that regulation.
Part of the reason the bad actors got to where they are today is because of a policy of neutrality practised by these platforms. Why did they do so? A law from 1996 (Section 230, 1996 Communications Decency Act – “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”) that shelters internet intermediaries from liability for the content their users post. While this made sense in the early days of the internet to ensure innovation and growth, it makes little sense today for platforms, such as Facebook, to shirk responsibility for user content that they store.
Will technology come to its own rescue?
Once the poor incentives (for platform neutrality) are adjusted, most likely, yes.
Facebook has repeatedly said that 99% of terrorist content is taken down (supported by Artificial Intelligence tools[ii]) before it is flagged by users. Other platforms are equally capable of using AI tools to flag objectionable content. Such an automated response to moderating or taking down ‘bad’ content should surely help rebuild trust in these platforms. This type of regulation (ownership of content) is increasingly being accepted by the likes of Facebook, with Mark Zuckerberg remarking that it is ultimately responsible for content. Facebook plans to double the number of people working on cybersecurity and content moderation to 20,000 employees in 2018.
With respect to user privacy and identity, I believe there are two fixes. The first fix is behavioural: increased scrutiny will make the platforms more responsible. If, in the past, Facebook did not restrict access to user data by rogue apps, after the Cambridge Analytica scandal it will need to be more cautious about such lapses. This means giving users more transparency and control over how, and with whom, their information gets shared. The second fix, in my view, will come from technology itself – Blockchains[iii], and more specifically zero-knowledge proof Blockchains. In layman’s terms, this cryptographic technique enables proving something (in this case user identity) without revealing any of the information that goes into the proof. This ensures full anonymization of user data, with no link back to sensitive or identifiable user data. Combining it with smart contracts (a feature of the Ethereum Blockchain) could give users full control over who can access what information. A central authority, such as a government, could issue these credentials. For example, the World Food Programme’s (WFP) Building Blocks project already uses a zero-knowledge Blockchain to dispense aid to Syrian refugees.
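To make the "prove without revealing" idea concrete, here is a toy Schnorr identification protocol, one of the classic zero-knowledge proofs of knowledge. The parameters are deliberately tiny for readability, and this is an illustration of the general principle, not the construction any particular platform or the WFP project uses:

```python
import random

# Public parameters: a small prime p = 2q + 1 and a generator g of the
# order-q subgroup. Real systems use numbers hundreds of digits long.
q = 1019               # prime
p = 2 * q + 1          # 2039, also prime
g = 4                  # generates the subgroup of order q mod p

# The prover's secret (think: a credential tied to user identity)
x = random.randrange(1, q)
y = pow(g, x, p)       # public key; recovering x from y is the hard problem

# --- one round of the protocol ---
r = random.randrange(1, q)
t = pow(g, r, p)                   # 1. prover sends a commitment
c = random.randrange(1, q)         # 2. verifier sends a random challenge
s = (r + c * x) % q                # 3. prover responds

# 4. verifier checks g^s == t * y^c (mod p): convinced the prover knows x,
# yet the transcript (t, c, s) reveals nothing useful about x itself
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("proof verified")
```

The check works because g^s = g^(r + c·x) = g^r · (g^x)^c = t · y^c (mod p); the random blinding value r is what keeps the secret hidden from the verifier.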
Regulation is a double-edged sword
At the end of the day, the regulators will have a difficult job. On the one hand, they want to hold platforms responsible for the content and be responsible with user data thereby creating norms of usage. On the other hand, regulation typically raises barriers to entry. It will make it difficult for smaller firms or new entrants to satisfy those regulatory requirements and possibly restrict their access to data that the behemoths, such as Facebook, already have. This could entrench the market position of the technology titans that it is trying to regulate. If data portability neutralizes platform power, it also exposes the data to abuse by bad actors.
There will be a tricky balance to strike between safeguarding users from abuse and limiting the platforms’ market power.
Disclaimer: This is a discussion of broad technology trends and not investment advice. Any investment decisions made are your own and at your own risk. All views, opinions, and statements are my own.
[i] Prisoner’s dilemma is a paradox in decision analysis in which two individuals acting in their own self-interest pursue a course of action that does not result in the ideal outcome. The typical set up is where both parties choose to protect themselves at the expense of the other participant. As a result of following a purely logical thought process, both participants find themselves in a worse state than if they had cooperated with each other in the decision-making process
[ii] If I haven’t already driven home the point about technology’s dual nature, it is worth re-noting that these AI tools that can flag objectionable content can also be used (by bad actors) to create more fake content – text as well as videos.
[iii] Blockchain is a distributed ledger where transactions/activity/information can be recorded chronologically and publicly. Given the use of cryptography to encode the information and “chaining”, it is almost impossible to alter the data retroactively. Interestingly, Blockchain technology is also the biggest threat to established digital platforms due to its ability to democratize “trust” (through decentralizing the record).