RISK Rituals by Dr. Richard Smith

Data at the Speed of Trust

Meta Platforms, the parent company of Facebook, announced a new AI Research Supercluster (RSC) that “will accelerate AI research and help us build for the metaverse.” According to Meta, once fully built it will be the fastest supercomputer in the world, and it’s already the fifth fastest.

Zuckerberg himself went on record with the Wall Street Journal saying, “The experiences we’re building for the metaverse require enormous compute power … and RSC will enable new AI models that can learn from trillions of examples, understand hundreds of languages, and more.”

Clicking on a few of the embedded links from the official announcement turns up intriguing concepts like self-supervised learning, dark matter of intelligence, and intelligent generalist models.

Meta-Zuck surely has a lot invested in this new supercomputer, but his biggest investment by far is in getting you and me to believe that this is going to work, that it is inevitable, and that we’d better just go along with it. His biggest investment isn’t in the so-called “metaverse” (please let’s not let him define that term for us) but rather in what I would call Meta’s verse.

Whatever you do, just don’t peek behind the curtain.

First, let’s be clear that there is a curtain here – and it’s as thick as they come. I challenge anyone to read through the press releases and ensuing press coverage and find anything substantial demonstrating what this new RSC is accomplishing today.

I can find two claims: first, that it improves image recognition; and second, that it’s helping Facebook do a better job keeping up with all of the hate speech on its platform. I can’t find any substantiation at all for this latter claim – only a lot of handwaving about future impact.

Controlling hate speech hasn’t exactly been a winning topic for Facebook lately. We’ve seen repeatedly that Facebook profits from the polarizing power of hate speech and is in no hurry to do anything meaningful about it. An AI supercluster is another delay of game. “Just give us more time … the AI cavalry is coming.”

Remember, it’s already the fifth fastest supercomputer in the world. They want us to believe that something magic is going to happen when it becomes the fastest and Meta-Zuck finally has the “biggest supercluster”? That game has been going on in AI for over half a century.

No thanks. Time’s up. When you know what needs to be done and you don’t do it, you don’t get any more chances until you do the right thing first. I love the way Professor Scott Galloway puts it – no mercy/no malice. Let’s move on.

Besides questionable claims and motives, there’s another, less well-known and even more devastating secret behind the big-tech AI curtain. It’s not nearly so mysterious and inscrutable as the “dark matter of intelligence” either (Meta’s term, not mine). In fact, it’s shockingly pedestrian: Their data isn’t up to the task, and it never will be.

It doesn’t matter how big and fast your “supercluster” is if you don’t have the high-quality data you need. You can’t spin data straw into data gold, no matter how big and efficient you can make your factory of robotic data looms.

There’s an acronym in the computer science world for this. It’s GIGO – garbage in, garbage out. “Garbage” is a strong word, and I don’t mean to imply that big tech doesn’t have any useful data. They have plenty of compelling data, but it’s limited in its scope and it will never suffice as the basis for the strong AI claims that Meta and others are hoping we will all get glassy-eyed over (while also hoping we just forget about the elevated suicide risk from excess social media time for young teen girls).

At the root of big tech’s data quality issue is the adversarial scorched-earth and resource-extraction approach that they have taken to data acquisition. We have covered this aspect of the current big-tech data-extraction business model many times here in this newsletter. It boils down to using polarizing content to cultivate addiction and drive “engagement,” together with a lack of transparency about where exactly their algorithms are designed to take us.

Until we get past this cloak-and-dagger approach to big data (and big media) we will never unlock the deeper levels of value that are latent in all this new technology.

We don’t need another supercluster that processes ever more bad data that was harvested in an adversarial environment of mistrust. We need better data and more transparency. We need data that moves, in Stephen M.R. Covey’s famous phrase, at the speed of trust.

When I look back on the Covid pandemic, for example, I see a missed opportunity. The politicization of Covid and the mistrust it engendered across society and even within families were devastating, costly, and counterproductive. The whole (mis)management of Covid started from a position of mistrust and, predictably, produced more mistrust.

Just as in the last newsletter we asked what, for example, Apple had to gain from anxiety, we have to ask ourselves what our leaders today have to gain from this propagation of mistrust. These kinds of questions are harder and harder to avoid when we know the means already exist to build more trustworthy and effective solutions.

The technology exists today that could have enabled an open-source, private, secure, and truly data-driven approach to navigating the Covid pandemic. Institutions could have shared data openly and securely while safeguarding their economic interests. Individuals could have shared data in a way that gave them confidence in the privacy and integrity of their own data and the protection of their civil liberties.
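
To make that concrete, here is a minimal sketch in Python of one such privacy technique, differential privacy, in which an institution releases a noisy aggregate instead of raw records. Everything below is illustrative – the names, numbers, and choice of technique are my assumptions, not a description of any real health system’s pipeline:

```python
import random

def laplace_noise(scale: float) -> float:
    """Laplace(0, scale), sampled as the difference of two exponentials."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def private_count(records: list, epsilon: float = 0.1) -> float:
    """Release a count with differential privacy.

    A count query has sensitivity 1 (adding or removing one person
    changes it by at most 1), so Laplace noise with scale 1/epsilon
    masks any single individual's presence in the data.
    """
    return sum(records) + laplace_noise(1.0 / epsilon)

# Illustrative only: a clinic shares a noisy daily positive count
# rather than raw patient records.
daily_results = [1] * 120 + [0] * 880   # 120 positives in 1,000 tests
print(round(private_count(daily_results, epsilon=0.1)))
```

The point isn’t this particular algorithm; it’s that sharing useful aggregates without exposing individuals is a solved technical problem.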

My friend and colleague Zohar Hod, founder of the embedded privacy platform One Creation (in which I am an investor), calls this principle “trust through control.”

I couldn’t agree more. Today we don’t trust big tech (or big anything for that matter) for good reason – because we have no control over our data or their algorithms. Moreover, big tech, big government, and the mass media don’t trust us either. They want to control the narratives that reach us rather than letting us truly enjoy free speech and an open and innovative society.

The way that Zohar sees it, valuable data rights can and should be managed in the same way that song rights are managed on Spotify. Spotify is a platform where content creators can publish their content and content consumers can pay for and acquire limited rights to that content.

In the end, what’s the technical difference between a valuable digital song and a valuable digital data set? Nothing. The technology exists today for businesses and institutions to share and monetize their valuable data sets in a much more open and transparent manner – and get handsomely rewarded for doing so.
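
Here’s a hedged sketch of that equivalence in Python. All of the classes and names below are hypothetical – this isn’t Spotify’s or anyone else’s actual API – but they show that the licensing structure is the same whether the asset is a song or a data set:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass(frozen=True)
class DigitalAsset:
    asset_id: str
    owner: str    # the rights holder who gets paid
    title: str

@dataclass(frozen=True)
class License:
    asset: DigitalAsset
    licensee: str
    uses_allowed: tuple   # e.g. ("stream",) or ("analytics",)
    expires: date
    price_usd: float

def grant_license(asset: DigitalAsset, licensee: str, uses: tuple,
                  days: int, price_usd: float) -> License:
    """Issue a limited, time-bound right to use an asset."""
    return License(asset, licensee, uses,
                   date.today() + timedelta(days=days), price_usd)

song = DigitalAsset("trk-001", "artist-42", "Some Song")
data = DigitalAsset("set-007", "hospital-09", "Anonymized admissions, 2021")

# Structurally identical transactions:
grant_license(song, "listener-1", ("stream",), days=30, price_usd=0.004)
grant_license(data, "researcher-1", ("analytics",), days=30, price_usd=250.0)
```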

Similarly, on the individual/consumer side, the technology exists that allows consumers to aggregate, secure, and share their own personal data with unprecedented control. The long-time leader in personal data privacy and control is Digi.me (another company that I invested in and on whose board I now sit).

Julian Ranger, founder of Digi.me, has been a strong advocate for human centricity when it comes to data architecture. Julian makes the “doh”-obvious point that the only way we are ever going to have truly rich data is when that data is owned and controlled by the individuals who generate it. That’s us.

Here’s how he puts it with watertight logic:

  1. Only the individual knows all their data sources – doctors and hospitals they have visited, bank accounts they have, social networks they use, etc.
  2. Only the individual has rights of access to a full copy of their data.
  3. Only the individual has unlimited usage rights to their data.
  4. Therefore, aggregation of RICH DATA can ONLY occur at the level of the individual.

In other words, a human-centric data architecture is the only way that we are ever going to get truly rich data with which we can solve real and meaningful problems.  
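
As a thought experiment, here’s Julian’s argument as a minimal Python sketch. It’s illustrative only – not Digi.me’s actual architecture – but it shows why rich aggregation has to start with the individual, who alone can enumerate and authorize every source:

```python
def fetch(source: str) -> dict:
    """Stand-in for pulling one's own data from an authorized source
    (a bank, a doctor, a social network, ...)."""
    return {"source": source, "records": ["..."]}

class Individual:
    def __init__(self, name: str):
        self.name = name
        self.sources = []   # only the individual knows the full list

    def authorize(self, source: str) -> None:
        self.sources.append(source)

    def aggregate(self) -> list:
        # Aggregation happens here, at the level of the individual,
        # because no third party has rights of access to every source.
        return [fetch(s) for s in self.sources]

me = Individual("alice")
for s in ("my-bank", "my-doctor", "my-social-network"):
    me.authorize(s)
profile = me.aggregate()   # the complete picture only I can assemble
```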

Case closed.

I don’t care how many different businesses Amazon can buy or create. I don’t care how big Meta’s supercluster is. I don’t care how many devices and services Apple can weave into our lives. None of them are ever going to be able to aggregate the individual digital profiles that can and will be possible with a human-centric data architecture.

We are much closer today than ever to widespread recognition that we need a new model for a free, prosperous, and innovative data economy. Trust is the key, and there is a growing weariness with the mistrust that underpins much of our economy today.

Here’s what I think needs to happen:

  1. We need to recognize that the data being collected in the legacy big-tech centralized data economy is too siloed and biased and is not sufficient to solve our most urgent problems. (Who wins the online gaming and self-driving car wars is not our most pressing problem.)
  2. We need “trust through control” at both the individual level as well as at the corporate/institutional level. 
  3. We need a two-sided marketplace of sovereign data consumers and vendors.
  4. We need an algorithm marketplace where we can see what the algorithms are optimizing so that we can decide what algorithms are most valuable to us.
  5. We need a micropayments infrastructure that allows for an exchange of value around data at fractions of a penny (see the sketch after this list).
  6. We need critical consumers who aren’t going to be distracted by things like Meta’s supercluster or Apple’s false promises of “Endless joy for all.”
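
On point 5, here’s a small Python illustration of why a fraction-of-a-penny ledger is usually built on integer micro-units rather than floating-point dollars, which leak value to rounding at scale. The unit, names, and prices below are hypothetical:

```python
from decimal import Decimal

MICROCENTS_PER_DOLLAR = 100 * 1_000_000   # 1 micro-cent = 1/1,000,000 cent

def to_microcents(dollars: str) -> int:
    """Convert a decimal dollar string into exact integer micro-cents."""
    return int(Decimal(dollars) * MICROCENTS_PER_DOLLAR)

ledger = {"data-owner": 0, "data-consumer": to_microcents("1.00")}

def pay(src: str, dst: str, amount: int) -> None:
    """Move an exact number of micro-cents between accounts."""
    assert ledger[src] >= amount, "insufficient balance"
    ledger[src] -= amount
    ledger[dst] += amount

# 10,000 record reads priced at 1/200 of a cent each: the totals stay
# exact, with no floating-point rounding drift.
for _ in range(10_000):
    pay("data-consumer", "data-owner", to_microcents("0.00005"))

assert ledger["data-owner"] == to_microcents("0.50")   # exactly 50 cents
```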

There’s simply too much value to be unlocked for a more trust-based data economy not to happen. A new data economy moving at the speed of trust is coming, and it is inevitable.

As we’ve discussed many times here, the big-tech house of cards all hinges on our consent. It’s our money. It’s our attention. We get to choose where to invest our resources, and our choices will influence the future we leave behind for our children and grandchildren.

I hope you’ll join me in investing in a better technology future in whatever way we can. Once enough of us start to say, “Thanks but no thanks,” the house of cards will start to come down. Maybe big tech will even begin to turn their prodigious capabilities to helping create a new and better digital world. That’s not going to happen, however, just by wishing for it.

P.S. If you’re an accredited investor and are interested in opportunities to invest in innovative privacy-focused startups, I’d be happy to share more information with you on the opportunities that I’m seeing. You can email me at hello@drrichardsmith.com.
