From Thurii to Quayside: Creating Inclusive Digital Communities

Richard Whitt
19 min read · Oct 22, 2020

Series: A Human-Centered Paradigm for the Web

Article Five (of Six)

“We can be controlled from the outside not simply by having our choice bypassed but by someone controlling the world we perceive.” — Maria Brincker

Series Recap: Our story so far . . .

Over the first four articles in this series, we have discussed the desirability of replacing the Web’s current SEAMs cycles (surveillance, extraction, analysis, manipulation) with a new HAACS ethos (enhancing human autonomy/agency via computational systems).

· Article one highlighted the dynamics of fostering large-scale systems change in our social institutions.

· Article two introduced us to Carla, a typical Web user, and showed how “Userhood” and SEAMs cycles (surveillance > extraction > analytics > manipulation) reduce her agency in the digital world.

· Article three described how entities acting as personal digital fiduciaries, along with tech tools like Personal AIs, could greatly empower Carla and other Web end users.

· Article four included an examination of another fiduciary-based institution, the data trust, and proposed ways it could be adapted for health data.

In This Article . . .

We now introduce a second example of a data trust, the community data trust, as a potential means of managing “smart cities” in an inclusive manner. We will also see how new IoT (Internet of Things) experiments may help improve accountability and agency in our governance of technology.

And we’ll see how the phrase “screens, scenes, and unseens” represents the billions of devices that increasingly surround our physical and virtual spaces.

The “screens, scenes and unseens” in Carla’s life.

Carla’s Covid Journey

After weeks of bed rest, Carla finally recovers from her illness, and is ready to reenter the physical world of her small city. As she walks through town, she passes a number of facial recognition cameras and new contact tracing beacons. In office building lobbies, she encounters guards deploying temperature sensors and “whitelisting” apps to determine if she has an immunity certificate.

Computer screens and sensors seemingly are popping up everywhere — but who, Carla wonders, is actually running these systems? And who, if anyone, is ensuring that the personal and environmental data being collected, sifted, and utilized reflects Carla’s best interests and concerns? In other words, what governance structures are in place to adequately represent Carla?

Sidewalk Labs and Smart Cities

Planned communities became more prevalent in the United States beginning in the 1950s. With the so-called “smart city,” however, the technology of the Internet of Things (IoT) is expected to take the planned community to a whole new level. By one definition, smart cities use a mix of connected technology and data to “(1) improve the efficiency of city service delivery, (2) enhance quality of life for all, and (3) increase equity and prosperity for residents and businesses.”

A somewhat broader definition would be a digital community, an existing place that is amplified, elevated, and served by digital technologies. Many such digital communities — such as Barcelona, Amsterdam, and others — are premised on harnessing connected technologies to help manage pre-existing common areas, particularly in larger municipalities. Examples of popular use cases include automotive traffic control, air quality sensing, street light controls, waste management, and noise detection — including the sounds of gunshots.

An early pioneer of the smart city concept was Alphabet’s Sidewalk Labs project in Toronto, Canada. As first announced publicly in October 2017, Sidewalk Labs was to construct entirely new physical and virtual spaces in the Quayside neighborhood. This approach differed from most other “smart cities,” where the virtual would overlay preexisting physical infrastructure. The Quayside project carried the potential to provide benefits to citizens and visitors that included enhanced security, environmental monitoring, and more efficient deployment of government resources.[1]

The Sidewalk Labs Quayside layout

As the Quayside project unfolded, questions arose about its proposed governance, and the use of IoT technologies to gather and analyze data. To some critics, the project introduced an unhelpful blending of the roles of governments and corporations, without a better reckoning of the necessary differences between the public and private sectors. As we will see, this apparent confusion in turn led to Sidewalk Labs employing shifting theories of the types of institutions that would actually run the project. Others expressed concerns about the project’s unveiling of a novel and untested concept of what was termed “urban data.”

In May 2020, project director Daniel Doctoroff announced that Sidewalk Labs was shutting down its Quayside project. He cited “unprecedented economic uncertainty” in the Toronto real estate market arising from the COVID-19 pandemic.

The Quayside project leaves both some open issues to explore, and useful insights to be gleaned. In this article, we’ll look briefly at two planned cities — one quite ancient, one very modern — as exemplifying the different physical and virtual layers of living well together. Hopefully these can guide our approaches to creating viable digital communities.

Thurii: balancing governance, cityscapes, and civic discourse

In 444 BC, Pericles, the leader of Athens, directed that a small group of Athenian citizens converge on the remains of the small settlement of Sybaris, on the coast of the Italian peninsula. There, according to the historian Diodorus Siculus, was founded a pan-Hellenic colony called Thurii (in modern day Calabria), presided over by representatives from ten tribes from all over Greece. Thurii was the first planned “city of the world,” built with the intention of blending and balancing principles of governance, architecture, and education.

Coin from the City of Thurii

Author David Fleming has developed an interesting twist on the story of Thurii.[2] His concern is “not so much the facts surrounding the town as the idea behind it, the vision of a good society that seems to have motivated it.” In Fleming’s telling, the town was planned as a model city incorporating three core design principles:

· a democratic constitution (governance)

· an “open,” orthogonal street layout (cityscape)

· a rhetorically-designed educational system (civic discourse)

Fleming argues that Pericles the political leader, Hippodamus the city architect, and Protagoras the lawmaker shared a common image for Thurii: “an autonomous community of free and equal citizens who would govern themselves through their own practical human capabilities — that is, through speaking, writing, and debating with one another.”[3] This image would play out in crafting the new city’s constitution, forming its educational system, and designing its built spaces. To Fleming, Thurii stands for the proposition that “a free, open, and well-functioning democracy depend[s] on those interconnections.”[4]

The goal of the Thurian enterprise was simple yet profound: to establish an inclusive global city, based on the best political, architectural, and educational precepts of that time. Of course, democracy in those days ran narrow (limited to free adult males), and deep (considerable civic participation in many fora). Similarly, we should not necessarily embrace the notion of three Greek men standing in for the entire demos, to decide on the future outlines of a new city. Their collective vision, while innovative for the times, was inherently limited.

Per Fleming’s suggestion, however, we should focus less on Thurii as a constructed reality, and more on what Thurii can represent for modern ears. Our aim here for digital communities is similar: to explore balanced ways of co-creating blended public spaces, and of managing how human beings (and their data) can interact with and flow through such spaces. Ultimately, we seek to consider what a holistic, cross-disciplinary enterprise like Thurii can mean for the very concept of a modern digital community.

In the next several sections, we briefly look at the ways that personal data flows between end users and institutions, utilizing a blend of “screens, scenes, and unseens.” As we will see, designers of smart cities may well assume that the same constraining aspects of “userhood” and SEAMs cycles we all experience online should simply expand unabated into the “offline” world of digital communities. That assumption cannot stand unchallenged.

Cloudtech data flows: lack of inclusion = lack of balance

Technology mediates between human beings and our experiences, often via software-based interfaces.[5] These amount to different kinds of points of presence — physical, virtual, or conceptual — at boundaries where information signals can flow between different systems.[6] As we interact with the Web over our mobile devices, for example, we straddle many entities’ virtual borders, often without even realizing it.

To software designers, robust feedback between people and systems is supposed to be “the keystone of the user-friendly world.”[7] Problems emerge, however, when the end user lacks an opportunity to provide feedback, while the Web side does not adequately represent the end user’s interests. In product design terms, the two sides are “not feeling the stakes.”[8]

Unfortunately, these issues of imbalanced information flows are pervasive on the Web. Computational systems deploying SEAMs cycles — what could be thought of collectively as a form of “cloudtech” — seek to maximize extraction of data and user engagement, on their terms. Most of the data typically is pulled from the end user, while shaping influences are then pushed their way. These pronounced systemic pulls-and-pushes are, in a word, unbalanced. One way of achieving greater balance is to design more inclusive and symmetrical computational systems.

Cloudtech data portals in our lives: “screens, scenes, and unseens”

Every day, Carla and the rest of us interact with computational systems in three ways, envisioned here as “screens, scenes, and unseens.” In each instance, the interfaces tend to occlude more than they present.

  • Online screens on Carla’s various devices provide access to search engines, social media platforms, and countless other Web portals in her life. As we saw in a previous article, Institutional AIs render recommendation engines that guide Carla to places to shop, or videos to watch, or news content to read. In each case, those behind the screens are making the decisions on Carla’s behalf.
  • Environmental scenes (sensors) are the “smart” devices — cameras, speakers, microphones, sensors, beacons, actuators — scattered throughout our homes, offices, streets, and neighborhoods. Through these gateways, computational systems gather a mix of personal (human) and environmental (rest-of-world) data. They are the “eyes and ears” of increasingly complex monitoring and analysis systems. The Ring doorbell placed by Carla’s neighbor across the street is but one example.
  • Bureaucratic “unseens” are computational systems hidden behind the walls of governments and companies. These scoring systems can render hugely life-altering judgments about Carla’s basic necessities, and personal interests — including who gets a job or gets fired, who is granted or denied a loan, and who receives what form of healthcare.

Each system plays a primary role in generating and managing the data that can flow through Carla’s life, and her community. How and why and where and when data currently is gathered, sifted, and utilized, however, raises many crucial questions. In particular, is the decision-making process equitable, and is the handling of the personal and environmental data accountable? As we will see, participatory governance structures like a community data trust, complemented by inclusive software agents, may be one answer to help rectify the power imbalances.

Receding interfaces, hidden power

The most profound technologies are those that disappear.
— Mark Weiser

In Thurii, the principle was to democratize the shared common spaces, so that each citizen was an equal who could mingle freely with others. Hippodamus the city planner designed streets that in his mind reflected “absolute equality among residential blocks.”[9] Similarly, Periclean oration paints a picture of Athens as a polis “where people can come and go as they please without surveillance from an inaccessible and mysterious hilltop.” A place, as Pericles puts it, “where the gaze of the many is directed to only a few.”[10] And not the other way around.

In a digital community, the issue is that those with “data power” can use it to establish interfaces as virtual control regimes.[11] Interfaces can become gateways to power, largely because of all the data that flows behind the “screens, scenes, and unseens.” These are not merely technical portals; “in the user-friendly world, interfaces make empires.”[12]

As it turns out, interface technologies tend to evolve over time from more visible to less visible (or even hidden) forms. What once was an obvious part of the user’s interactions with a system gradually becomes embedded in local environments, or even vanishes altogether. As computer scientist Mark Weiser put it nearly 30 years ago:

“the most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it.”[13]

With those “cloudtech” interfaces, the tradeoff for humans is straightforward: exchanging control for more simplicity and ease. In these contexts, technology moves from being a tool to becoming its own agent of the underlying system. While interfaces can remove friction, they can also foreclose thoughtful engagement. When you reduce participation, you reduce involvement in decision-making. While this progression in itself may well bring many benefits, it also muddles the motivations of the system operating silently from a distance.

Human engagement with these receding interfaces also becomes less substantive. From typing on keyboards, to swiping on screens, to voicing commands — the interface context shapes the mode and manner of the interaction. At the same time, these systems can conjure the illusion that they still support human agency. From the perspective of the average person, interfaces to these systems can seem deceptively controllable — local, physical, and interactive — even as the mediating processes themselves are far-removed, virtual, and unidirectional.


This lack of symmetry and inclusion becomes all the more acute in the digital community environment. In addition to environmental data, these devices can register and collect a vast range of biometric information about the self — from one’s geolocation, facial expressions, and voice patterns, to even one’s walking gait.

And yet, merely by walking through a sensor-laden physical space, an individual is assumed to accept their presence and operation — with no realistic opt-out. As one European report has amply detailed, the user’s loss of control in digital public spaces is manifold — including the inability to consent, or object, to data surveillance, collection, and processing. In systems parlance, the feedback loops of these physical spaces become even more attenuated, or disappear altogether. Traditional accountability concepts, like notice and choice, can become meaningless in these environments.

Nor is there an actual living entity with which to engage. In the typical digital community, drivers, pedestrians, and others at most may receive some transparency in how systems make use of data, and some accountability in how systems safeguard such data. And yet, the individual has no place in that decision tree. There is no obvious opportunity to engage, to question, to negotiate, to challenge, to object, to seek recourse — in other words, to exercise one’s personal agency. Without mediating processes in place to set the underlying “rules of the road,” and with interfaces unable to accept and act upon such mediations, there is no viable way to opt out of the system’s prevailing SEAMs control cycles.

Two lessons from Sidewalk Labs: pathways to more balanced data flows, and to more inclusive governance

What useful takeaways can be derived from the Sidewalk Labs project in Toronto? At least two interrelated conversations are worth continuing. First, more human-centric technology deployments should include opening up the front-end of the software interfaces. Second, more human-centric institutional controls should include opening up the back-end of the project governance. Optimally, balanced processes of interaction for these two forms of human-to-system interfaces would be devised and implemented in concert.

A DTPR Snapshot: Agential software interfaces

In 2019, Sidewalk Labs publicly launched the open source Digital Transparency in the Public Realm (DTPR) project. The DTPR team was tasked with creating icons and signage that would allow pedestrians to understand what kind of function was being employed by a particular environmental device.[14]

As the project heads acknowledged, cities like Boston and London “have already taken important first steps by posting clear signage whenever they employ digital technologies in the public realm.” An early component proposed by the DTPR team used its own comprehensive “consent through signage” system to inform citizens about data collection practices. Citizens then face a decision: remain on the scene, which indicates consent, or withdraw consent by departing the scene.[15]

Sidewalk Labs’ signage system to help explain invisible sensors: icons denoting purpose (in black), data type (blue for de-identified, yellow for identifiable), accountable organization, and links to digital channels to learn more.
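The signage taxonomy described above lends itself to a simple data model. As a rough sketch only — the class and field names here are hypothetical illustrations, not the actual DTPR schema — each sign could be represented as a structured record:

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical modeling of the DTPR signage concept; the real DTPR
# standard defines its own taxonomy and icon set.
class DataType(Enum):
    DE_IDENTIFIED = "blue"    # blue icon: de-identified data
    IDENTIFIABLE = "yellow"   # yellow icon: personally identifiable data

@dataclass
class SensorSign:
    purpose: str              # black icon: what the device does
    data_type: DataType       # color-coded data classification
    accountable_org: str      # organization responsible for the device
    info_url: str             # digital channel to learn more

    def summary(self) -> str:
        # One-line, human-readable rendering of the sign's contents
        return (f"{self.purpose} | {self.data_type.name.lower()} data | "
                f"operated by {self.accountable_org} | {self.info_url}")

sign = SensorSign("pedestrian counting", DataType.DE_IDENTIFIED,
                  "City Transportation Dept.", "https://example.org/dtpr")
print(sign.summary())
```

The point of such a structure is that the same record could drive both the physical signage and any digital channel (app, chatbot, website) that answers questions about the device.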

Needless to say, while such an approach by itself may heighten transparency, it grants ordinary citizens little recourse. How can one gain the benefits of truly belonging to a digital community, without giving up control over access to one’s personal data?

To DTPR’s credit, its initial focus on transparency — informing pedestrians about the “what” of a sensor’s activity — shifted quickly to a phase two. This phase was devoted to engendering greater accountability for the underlying system’s actions. As part of this phase, the DTPR project team undertook concerted outreach to designers and others to “advance digital transparency and enable agency.”

In the last few months before the Quayside project was terminated, the DTPR team went further still. Using co-design sessions, charrettes, small group discussions, and prototyping, the team sought to investigate opportunities for actual human agency — in particular, direct human-to-interfaces interactions within the sensors system.[16] Intriguingly, prototypes for conversational chatbots and Personal AIs were introduced, discussed, and tested for feasibility. As the team summarized:

The chatbot supports visual, auditory, and tactile modalities, makes it easy to find different kinds of information, provides links, schematics, or documentation, and can adapt to the user’s level of interest in detail…. We asked charrette participants to imagine that five years in the future, they have a personal digital assistant provided by an organization they trust (such as a bank), that provides automated data/privacy information tailored to an individual’s preferences. We also shared the results from our GRIT user tests on how research participants responded to that concept. We explored how that digital personal assistant, in the form of a chatbot, could provide answers about systems and places in a standardized manner, using the DTPR taxonomy. We wanted to see how this concept could encourage users to develop expectations around transparency and accountability of spaces, provide a flexible way for users to interact with a physical space and the digital technology within it, and adapt and learn as users asked new questions.

The DTPR team also shared the insights they gleaned from their user interviews on the feasibility of personal digital assistants:

· “Concept feedback sessions showed the desirability of a trusted digital assistant to help with daily tasks.”

· “People want to ask questions at a time and context that is convenient to them, not be interrupted mid-flow.”

· “Trust varies person by person, case by case; there is no ‘one size fits all’ approach.”

The “agency” phase of the DTPR project offered some fascinating prospects. If successfully pursued, these kinds of interactive IoT systems could open up real opportunities for humans to engage on their own terms as they go about their lives in digital communities. The DTPR team also included regular outreach to actual people, to garner their reactions to a variety of design options.

As it turns out, DTPR remains very much alive. The open source project is now being stewarded by an emerging coalition of organizations led by Helpful Places, whose co-founders steered DTPR during their tenure at Sidewalk Labs. The project is actively seeking coalition partners, from funders to developers to researchers, to further DTPR’s development. The team also has begun piloting real-world opportunities, including collaborating this past summer with the City of Boston to further promulgate the DTPR communication standard. The “new” DTPR — which now stands more appropriately for “Digital Trust for Places and Routines” — even has a five-year vision for increasing accountability and enabling new forms of personal agency in digital communities.

Various learnings squarely teed up by the “front-end” DTPR project point to the potential for more inclusive “back-end” entities to manage a community’s data flows. Unfortunately, the governance side of the project didn’t appear to be drawing the same lessons about community-driven design.

Designing inclusive governance: community data trusts

As we saw in the previous article, the data trust offers one particular governance model to engender greater trust and accountability. In theory at least, the data trust concept can be applied as well in a community setting. Among other benefits, the trust mechanism can be designed to tap into the benefits that come from sharing and utilizing data from various sources.

An early proponent of the “civic trust,” Sean McDonald, has explained how the model uses the common law of trusts to build public participation spaces. Specifically, the civic trust embeds network governance into the way that technology products evolve. The public is the trust, the technology company is the licensee, and stakeholders can include users, investors, and the public at large. An independent organization would own the code and data resources, which third parties in turn could use and adapt. The Civic Trustee would ensure that the public has a meaningful voice, as well as foster the integrity of decision-making processes.

Globally, there are few examples of civic trusts. As McDonald observed, the Toronto project was the largest-scale such model proposed to date. One key takeaway from the Quayside experience is the need to test whether the theory can become viable in reality.

The project’s publicly-stated goal was lofty: its “proposed approach to digital governance aims to serve as a model for cities around the world.” As it turns out, the governance structure never settled down long enough to be implemented. Over some eighteen months, Sidewalk Labs explored first the creation of what it labelled a “data trust,” then a “civic data trust,” before ultimately adopting the nomenclature of an “urban data trust” (UDT). Crucially, Sidewalk Labs itself made clear that the UDT model would not be a trust in a legal sense — meaning, among other things, there would be no adoption of express fiduciary duties to trustors. Not surprisingly, the shifting governance approaches attracted public resistance, including from some associated with Waterfront Toronto itself.

While Sidewalk Labs garnered praise for making its proposals public, one critic deemed the project to be “riddled with contradictions,” including conflicting theories of control over data. The new concept of “urban data” — meaning data that is collected in public spaces and treated as a type of “public asset” for sharing — also drew criticism.[17] Element AI further critiqued what it saw as a top-down governance model introducing power imbalance. By comparison, they pointed to the bottom-up model championed by Delacroix and Lawrence, in which users could collectively pool their own data. Finally, as an overarching matter of process, Sidewalk Labs’ seemingly reactive alterations to the trusts-based governance models likely were unhelpful to the public deliberations as well.[18]

The Sidewalk Labs project highlights the challenges, and limitations, of developing a comprehensive community system of sensors and interfaces, without adequately inclusive governance. The Quayside project’s ultimate demise was unfortunate in at least one respect: it precluded a more open conversation about the precise mechanisms and processes that could comprise a successful civic data trust.

For example, the civic/urban trust could have been devised so that a citizen’s own digital intermediary would be empowered to interact directly, on her behalf, with the Sidewalk Lab computational systems. Such interactions could have been facilitated through the very chatbots and Personal AIs that were being explored in parallel via the project’s DTPR process. In essence, the back-end of trust governance could have benefited from more fruitful connections with the front-end of sensor interface technologies.

Community Data: another opportunity for digital fiduciaries

As we saw in the previous article, the digital fiduciary concept can help spur adoption of the data trust model. In particular, a personal digital fiduciary can help clients explore the brave new world of data trusts.

In this case, as Carla goes about her daily activities in her local cityscape, a personal digital fiduciary can provide her the technical means of interacting in real-time with a community data trust, and other digital systems that she may encounter in her travels. These interactions would be enabled via the software interfaces embedded all around her.

A personal digital fiduciary not only could interact with systems on Carla’s behalf, but also where necessary protect Carla by interrogating and even challenging other computational systems. As one example, a Personal AI programmed to operate under fiduciary duties of loyalty and care could do things like:

· Question attempts to analyze Carla’s personal data without her consent;

· Block unwarranted forms of surveillance and data extraction; and

· Thwart attempts to manipulate Carla’s activities in her physical environment.
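As a purely illustrative sketch — all names, actions, and rules here are hypothetical, not any real Personal AI interface — the three fiduciary duties above could be expressed as simple policy checks applied to incoming data requests:

```python
from dataclasses import dataclass

# Hypothetical sketch of fiduciary policy logic; the action categories
# mirror the SEAMs cycle (surveillance, extraction, analysis, manipulation).
@dataclass
class DataRequest:
    requester: str       # who is asking, e.g. an ad network or city sensor
    action: str          # "analyze", "extract", "surveil", or "manipulate"
    has_consent: bool    # whether Carla has consented to this use

class PersonalFiduciaryAI:
    """Applies duty-of-loyalty rules on the user's behalf."""
    BLOCKED_ACTIONS = {"surveil", "extract", "manipulate"}

    def evaluate(self, req: DataRequest) -> str:
        if req.action == "analyze" and not req.has_consent:
            return "question"   # challenge analysis attempted without consent
        if req.action in self.BLOCKED_ACTIONS and not req.has_consent:
            return "block"      # refuse unwarranted surveillance/extraction
        return "allow"          # consented interactions proceed

ai = PersonalFiduciaryAI()
print(ai.evaluate(DataRequest("ad-network", "extract", False)))  # block
```

The interesting design question is less the rules themselves than where such logic runs: under a fiduciary model, it would execute on an agent loyal to Carla, not on the systems surveilling her.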

Conclusion: Adopting Ancient Lessons for Modern Communities

“New technologies have not made place irrelevant in our lives or fundamentally altered our embeddedness in the physical world. If anything, they have made place more important. Despite our fractured subjectivity, our insistently networked existence, and our hybrid culture, the ground under our feet remains surprisingly important to us and desperately in need of our care…. Places matter!” — David Fleming, City of Rhetoric (at 32–33).

Against a backdrop of widespread governance failures worldwide in our economic, political, and social systems, the near-term opportunity is apparent. As was attempted at Thurii some 2500 years ago, today we can craft governance structures and spatial processes that work together to support inclusive physical environments for real people.

For our digital communities, the challenge is joining together two critical pieces: a trustworthy entity that is accountable to the people, and technologies that invite rather than curtail active civic participation. While to date we have no obvious success stories to tout on its behalf, the civic data trust may well become one such governance mechanism. Because places matter.

In our next and final article, we will summarize where Carla’s journey has taken us, and recognize the salience of digital lifestreams in bringing it all together.


(For further information about the “new” DTPR, and to connect with the team, check out their just-unveiled website at

The Author gratefully acknowledges:

— Content, editing, and graphics contributions by Todd Kelsey

— Article series supported by Omidyar Network

[1] See generally

[2] Fleming, David. “The Streets of Thurii: Discourse, Democracy, and Design in the Classical Polis.” Rhetoric Society Quarterly, vol. 32, no. 3, 2002, pp. 5–32. JSTOR, Accessed 10 Oct. 2020.

[3] Fleming, Streets of Thurii, at 6.

[4] Fleming, Streets of Thurii, at 27.

[5] Richard Whitt, Through A Glass Darkly, at 147.

[6] Luciano Floridi has observed how marketing uses people as interfaces, to be exploited by commercial and political players for our data, our money, and our votes. Luciano Floridi, Marketing as Control of Human Interfaces and Its Political Exploitation, August 19, 2019.

[7] Kuang and Fabricant, User Friendly, at 32.

[8] Kuang and Fabricant, User Friendly, at 34.

[9] Fleming, Streets of Thurii, at 18.

[10] Fleming, Streets of Thurii, at 12.

[11] Galloway, The Interface Effect, at 90–94.

[12] Kuang and Fabricant, User Friendly, at 145.

[13] Mark Weiser, “The Computer for the 21st Century,” Scientific American, September 1991, 94.


[15] Artyushina, Is Civic Data Governance the Key to Democratic Smart Cities?, at 29.


[17] For two novel perspectives on the Sidewalk Labs saga in Toronto, see Anna Artyushina, Is Civic Data Governance the Key to Democratic Smart Cities? The Role of the Urban Data Trust in Sidewalk Toronto, Telematics and Informatics 2020, at 19–31; Teresa Scassa, Designing Data Governance for Data Sharing: Lessons from Sidewalk Toronto, Technology and Regulation (2020).

[18] Scassa, Designing Data Governance, at 56.



Richard Whitt

Richard is a former Googler with a passion for making the open Web a more trustworthy and accountable place for human beings.