Richard Whitt
11 min read · Jun 24, 2020

A Human-Centered Paradigm for the Web:

COVID-19, and Pathways to Our Digital Empowerment

(first in a monthly series)

“Certainty hardens our minds against possibility.” — Ellen Langer

Seeking Autonomous Futures

Abiding in the long shadow of a still-raging global pandemic, with its pernicious economic and societal fallout, we have an opportunity to pause and consider where exactly humanity stands. By all accounts, our many intertwined social systems are not serving most of us very well. In some cases — from contending with black swans,[1] to gray rhinos,[2] to the ordinary challenges of everyday life — these systems are failing spectacularly before our eyes.

The probable pathways heading into the future are not promising. Persistent risks to human health. Economic vulnerabilities and disparities. Systemic racial injustice. Cultural clashes. Political divides. And still in the offing, large-scale environmental disaster.

Looking through the lens of the current global pandemic, this series of articles will focus on one particular set of system challenges: our interactions with the World Wide Web. These articles will argue for replacing the Web’s current ethos of “extract and exploit” with my conception of a new “HAACS” paradigm to enhance the human digital experience:

HAACS = Human Autonomy/Agency, via Computational Systems

Each article will employ real-life scenarios, involving typical online and offline activities, to demonstrate concrete ways to better govern society’s interactions with our digital selves. In the process, we will explore enduring policy concepts of fiduciaries, trusts, and stewardship — and cutting-edge technologies like Personal AIs and symmetric interfaces.

Leveraging the COVID-19 Moment

The COVID-19 pandemic presents unprecedented global challenges, across countless economic, political, and social systems, both immediately and in the longer term. Right now, advanced digital technologies are being touted as ways to ameliorate health impacts. As but one example, companies like Google and Apple propose tech solutions that use personal and environmental data, and advanced AI/computational systems, to detect and analyze useful medical screening information.

Many of these emerging technologies can seem innocuous, and even helpful, but they all involve accessing and using sensitive data about living, vulnerable human beings. Current examples include apps for symptom scanning/tracing (who is symptomatic?), contact tracing (who has tested positive?), antibody testing (who gets immunity certificates?), and health profiles (who is at an elevated risk?).
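To make the privacy stakes concrete, here is a minimal sketch of how a decentralized exposure-notification scheme can work, broadly in the spirit of the Google/Apple proposal. Every name, key size, and rotation interval below is an illustrative assumption, not the actual protocol:

```python
import os
import hashlib

# Illustrative sketch of decentralized exposure notification.
# Key sizes, intervals, and function names are assumptions for
# exposition; the real Google/Apple protocol differs in detail.

def new_daily_key() -> bytes:
    """Each phone generates a fresh random key every day."""
    return os.urandom(16)

def rolling_id(daily_key: bytes, interval: int) -> bytes:
    """Derive a short-lived identifier to broadcast over Bluetooth.
    Nearby observers cannot link it back to a person or a day."""
    return hashlib.sha256(daily_key + interval.to_bytes(4, "big")).digest()[:16]

# Phone A broadcasts rotating identifiers; Phone B records what it hears.
a_key = new_daily_key()
heard_by_b = {rolling_id(a_key, i) for i in range(96)}  # e.g., 15-minute slots

# If A later tests positive, A uploads only its daily keys. B re-derives
# the identifiers locally and checks for overlap; no central party
# ever learns who met whom.
rederived = {rolling_id(a_key, i) for i in range(96)}
print("possible exposure" if heard_by_b & rederived else "no contact")
```

Even under such a privacy-preserving design, the governance questions remain: who operates the diagnosis-key server, who audits the matching thresholds, and under what enforceable duties?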

While the technical specs for many such apps are publicly available, notably absent is an explanation of the behind-the-scenes “human” infrastructure. Governance structures and processes are necessary to establish the duties of the entities involved, and the guardrails that protect the autonomy, agency, and privacy interests of individuals and their communities. In essence, governance is designed to respond to all the pressing questions — the whys, whos, whats, whens, and hows — of a particular technology’s use.

By failing to address that missing part of the infrastructure equation, tech companies and governments alike leave the rest of us with but two words: “trust us.” And the issue will not go away. Even (or especially) in a post-COVID-19 environment, what new norms and practices will be tolerated, or even embraced? What role, if any, will there be for all stakeholders?

Challenging Webs of Distrust

The convenience and coolness of our digital technologies can mask subtle forms of inequity. One pressing example is the advent of today’s Web platforms. Corporations and governments alike are increasingly subjecting each of us to one-sided methods of leveraging data. These systems typically mix personal data about us, AI-based algorithms trained on us, and interfaces that tend to hide more than they reveal. The purpose of all this impressive technology has become clearer with time — the ageless draw of power, control, and money.

Those entities flourishing most in these Web platform ecosystems have been perfecting what could be thought of as the “SEAMs” paradigm. Under this animating principle, these entities undertake four basic tasks as part of a constant feedback cycle:

· Surveil people through their devices and apps;

· Extract their personal data;

· Analyze that data for useful insights; and

· Manipulate the people for financial gain.

This SEAMs control cycle has helped fuel a monoculture of data monetization, treating individuals as mere users rather than people, often without gaining fully informed consent. In this landscape of “surveillance capitalism,” ranking algorithms can end up amplifying divisiveness and untruths, as the most controversial or extreme content produces the most clicks. This rapidly evolving ecosystem of data and algorithms and interfaces can capitalize on our underlying systemic failings, and entrench them even further.
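To see why this operates as a self-reinforcing cycle rather than a series of one-time transactions, consider a deliberately toy rendering in code. Every function below is a hypothetical placeholder for illustration, not any platform’s actual system:

```python
# Toy model of the SEAMs feedback cycle. Every function here is a
# hypothetical placeholder, not any real platform's code.

def surveil(user):
    """Observe behavior through devices and apps."""
    return {"clicks": user["clicks"], "dwell": user["dwell"]}

def extract(signals):
    """Turn observed behavior into stored personal data."""
    return list(signals.values())

def analyze(data, model):
    """Update a behavioral profile from the extracted data."""
    model["profile"] = sum(data) / len(data)
    return model

def manipulate(user, model):
    """Nudge the user toward more engagement, feeding the next pass."""
    user["clicks"] += 0.1 * model["profile"]
    return user

user, model = {"clicks": 10.0, "dwell": 5.0}, {"profile": 0.0}
for _ in range(3):  # in practice, the cycle never stops
    user = manipulate(user, analyze(extract(surveil(user)), model))
```

Notice what the loop optimizes: the platform’s engagement objective. Nothing in it represents the person’s own interests.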

So, what are we to do about it?

Many so-called realists likely reply, “probably nothing.” If anything, the largest tech platforms seem to be capitalizing on the moment to get even larger and more powerful. Perhaps at best we can muster some modest political or market reforms around the edges. Compel the platforms to be a bit more accountable, and a bit less harmful, and call it a day. According to this attitude, history says we should not hope to achieve anything more.

The present moment invites us to reexamine those Web-centric assumptions that have accreted over the last several decades. Holding these predominant Web platforms and their ecosystems more accountable for their practices certainly is a necessary objective — particularly in a post-COVID-19 world where our very health and medical data is vulnerable to misuse.

Nonetheless, greater transparency and accountability alone are not sufficient to alter our current trajectory. In order to effectively contest the SEAMs control cycles — Surveil, Extract, Analyze, Manipulate, then repeat — we need a new set of guiding principles we can call our own. We need an ethos to fuel the aspirational goal of building ecosystems that elevate humans, rather than subjugate them. We need to strengthen the autonomy and agency of ordinary human beings as they use digital technologies. We need to actively promote the best interests of users as actual people, as opposed to being a collection of data points.

Witnessing, with Power

In the stunning aftermath of the brazen murder of George Floyd, we can see glimmers of hope. In this instance, a democratized form of technology, a video app on a mobile phone, captured a single act of witnessing. The ability to share that act with others transformed 8 minutes and 46 seconds of horror — “I can’t breathe,” white knee pressing into black neck — into a worldwide movement against government oppression of people of color.

We have seen in the recent past other such examples where the digital validation of events helped spur societal reforms. These moments tap into preexisting undercurrents of injustice, decades and centuries in the making. Without doubt, digital technology in the hands of ordinary citizens can act as an equalizer, fueling the right to question, and even challenge, institutional power.

The moment may at last be upon us for enduring change in at least some of our society’s racially-biased institutions. And yet, one must ask: what has taken so long for this “Enough!” moment? Why have past instances of what some have come to call “sousveillance” — surveillance technologies, like mobile video cameras, turned back on those in authority — failed to deter police and vigilante violence against Blacks, even today?

As Ethan Zuckerman has recently opined, “information only works when it’s harnessed to power.” Courageous acts of witnessing by themselves are not enough to compel a corrupt system to change. Such transformations must start with two kinds of power: the ability to gain robust knowledge of the underlying system itself, and the ability to take advantage of various levers of opportunity to transform it. Power is in the understanding, and in the action.

Surfacing the Systems

In both regards, systems thinking can provide useful tools. On the understanding front, the iceberg metaphor gives us one example.[3] The tip easily seen just above the waterline is the event level — individual and seemingly isolated acts. Over time, and below the surface, these events can be linked as more meaningful patterns of behavior. Eventually, we are able to expose the deeper underlying systemic structures that engender the patterns and events. And those structures in turn are frozen instantiations of people’s behaviors, past and present.

In this vein, what are the historic roots of something as pervasive as the existing SEAMs paradigm? Shoshana Zuboff has articulated the role of “surveillance capitalism,” with the behavioral surplus of humans serving as a new means of production.[4] Tim O’Reilly blames a “Bizarro World” where companies seek to capture more value than they themselves produce.[5] Anand Giridharadas describes the makings of “MarketWorld,” where those with concentrated power engage in only partial and self-preserving good deeds in the place of real change.[6]

Deeper, more expansive roots also can come into focus. The SEAMs paradigm may simply be the digital-era equivalent of the agricultural, industrial, and financial sector imperatives of market capitalism. To systems theorists, “Success to the Successful” is a common dynamic of social systems like capitalism, where wealth or power becomes concentrated in the hands of a few.[7] This framing carries over to technology as well. For example, data feminism seeks to challenge the unequal uses and abuses of power in society, with a focus on how data science intersects with social, racial, and gendered inequalities.[8]
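A tiny simulation can make the “Success to the Successful” archetype tangible. In this hypothetical sketch (every number is an arbitrary assumption), each round’s new resources are split in favor of whoever already leads, and near-equality collapses into concentration:

```python
# Hypothetical "Success to the Successful" simulation: each round,
# new resources are allocated in proportion to the square of each
# player's current share, so an early lead compounds on itself.

shares = [0.51, 0.49]                  # two players, nearly equal
for _ in range(20):
    weights = [s * s for s in shares]  # advantage compounds
    pot = 0.5 * sum(shares)            # this round's new resources
    total_w = sum(weights)
    shares = [s + pot * w / total_w for s, w in zip(shares, weights)]

total = sum(shares)
print([round(s / total, 2) for s in shares])  # the early leader ends with nearly all
```

Nothing in the loop is malicious; the concentration follows from the structure alone. That is precisely the systems theorists’ point.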

To other analysts, a logic of extraction and exploitation sees the world and its inhabitants as resources to be “coded, quantified, and rationalized to serve economic growth…”[9] The systems imperative there “is to grasp the inner connections that conduct flows of power, capital, and energy through the grid of capital accumulation — and in doing so to shed new light on the limits of that very grid.”[10]

Finding our Leverage

Donella Meadows, the great teacher of complexity theory, says there are many ways to alter existing systems so that they “produce more of what we want and less of that which is undesirable.”[11] She charts out a dozen different kinds of leverage points to intervene in floundering systems. Examples include altering the balancing and reinforcing feedback loops (#s 7 and 8), modifying information flows (#6), and crafting new forms of self-organization (#4).[12]

However, Meadows notes, the single most effective approach is to directly challenge the existing paradigm — with its “great big unstated assumptions” — that props up a suboptimal system. We can do so in two ways: relentlessly pointing out the anomalies and failures of that prevailing paradigm, while also working with active change agents to construct the foundations of a new one. As Meadows puts it, “we change paradigms by building a model of the system, which takes us outside the system and forces us to see it whole.”[13]

The energy for change is mobilized by firmly establishing a discrepancy between what people want, and where they are. A creative tension between the two allows stakeholders to resolve in favor of their aspirations.[14] Bridging the gap can occur through high-leverage interventions, including engaging new stakeholders and learning from experience.

Even the obvious limits of leveraging our way to the possible can give us room to create. Mark Taylor reminds us that a perennial misplaced promise of technology visionaries is that in the future, all will be possible.

Possibilities are inevitably limited by constraints that can never be overcome. The only viable freedom is not freedom from constraints but the freedom to operate effectively within them…. [Nonetheless, constraints] are not merely negative but can be productive; indeed, there are no creative possibilities without significant constraints. Constraints provide the parameters within which thinking and acting must occur.[15]

One person’s constraint is another’s way to leverage real change.

Leverage points can become sources of power. As David Peter Stroh reminds us, the key is “to connect leverage points into a coherent path forward.”[16] If computational systems — of data, of AI, of interfaces — have become the fulcrum of the SEAMs-based Web, then that is where the leverage also resides. If our value is to be measured in bytes, those same technology tools can become implements each of us wields.

From Mistrust to … Trustworthiness

As Rebecca Solnit has so marvelously chronicled, “Disasters provide an extraordinary window into social desire and possibility, and what manifests there matters elsewhere, in ordinary times and in other extraordinary times.”[17] In light of our shared crises, and with a heightened awareness of the systems that surround us, now is a time to rethink and reshape our world. We can consider how digital technologies can be designed to promote, and even enhance, our individual and collective humanity.

One common denominator seems to be that our fundamental freedoms as human beings are in real jeopardy — the thoughtful autonomy of our inner selves, and the impactful agency of our outer selves. Too often, our predominant social systems negate personal context, ignore mutual relationship, and undermine more inclusive perspectives. They can constrain more than they liberate.

Nonetheless, as this series will explore, a new ethos is possible, one more grounded in promoting the needs and aspirations of ordinary people. The HAACS paradigm can become the animating principle for a new generation of governance mechanisms, by promoting human autonomy and agency.

Perhaps creating human infrastructures of trust for our current challenges paves the way for other forms of trust-building social systems. Perhaps treating data as a source of mutual relational benefit frees it up for as-yet undreamed uses. Perhaps putting tech in the hands of ordinary citizens gives them power to exercise in other domains of their lives.

Ellen Langer’s observation still rings true: we do have within us the wherewithal to do more — to challenge the supposed certainties of our time, and to seize the possibility of creating something better. That too is a lesson of history.

Conclusion: Harnessing HAACS over SEAMs

Real enduring systems change is difficult. Altering governing paradigms is immensely hard work. But perhaps enough of us are reaching a tipping point, and beginning to demand that more be done in the public interest. Our current predicament may yet instill in us a drive to discover, or create, new places to put our trust. A possibility of inventing futures that we all will want to live in.

As our lagging systems struggle and strain to maintain viability, stakeholders have a window of opportunity to plant a flag on a bold new vision of enhancing human power and control in the computational era. For those many intriguing possibilities that lie beyond — or even within — the current global pandemic, Rebecca Solnit’s words are especially salient:

Disaster sometimes knocks down institutions and structures and suspends private life, leaving a broader view of what lies beyond. The truth before us is to recognize the possibilities visible through that gateway and endeavor to bring them into the realm of the everyday.[18]

Together, we can help ensure the availability of those social infrastructures to support robust autonomy/agency for all human beings. Humans and machines and institutions then can exist together on a far more level playing field. With the bulk of the humans (more) firmly in charge.

*************************************

Next month: D>=A

For more information, please go to the GLIA Foundation website.

Supported by Omidyar Network

*******************************

[1] Nassim Nicholas Taleb, The Black Swan (2010) (describing highly improbable but big threats).

[2] Michele Wucker, The Gray Rhino (2016) (describing highly obvious but ignored threats).

[3] David Peter Stroh, Systems Thinking for Social Change (2015), at 36–37.

[4] Shoshana Zuboff, The Age of Surveillance Capitalism (2019), at 97.

[5] Tim O’Reilly, WTF?: What’s the Future and Why It’s Up to Us (2017), at 249.

[6] Anand Giridharadas, Winners Take All (2018), at 11.

[7] Stroh, Systems Thinking for Social Change, at 60–61.

[8] Catherine D’Ignazio and Lauren F. Klein, Data Feminism (2020), at 12.

[9] Jason W. Moore, Capitalism in the Web of Life (2015), at 2.

[10] Moore, Capitalism in the Web of Life, at 7.

[11] Donella Meadows, Thinking in Systems: A Primer (2008), at 145.

[12] Meadows, Thinking in Systems, at 153–157.

[13] Meadows, Thinking in Systems, at 163–164.

[14] Stroh, Systems Thinking for Social Change, at 73–74.

[15] Mark Taylor, The Moment of Complexity (2001), at 224.

[16] Stroh, Systems Thinking for Social Change, at 167.

[17] Rebecca Solnit, A Paradise Built in Hell: The Extraordinary Communities That Arise in Disaster (2009), at 6.

[18] Solnit, A Paradise Built in Hell, at 313.

Richard Whitt

Richard is a former Googler with a passion for making the open Web a more trustworthy and accountable place for human beings.