
Twitter and the void beyond

Some thoughts on Twitter. Perhaps I’ve not much to add beyond what others have said, but I want to say it anyway. (This was, after all, the point of Twitter.)

Twitter has been the most significant digital space of my life. I met my wife there. I met my community there. I made lifelong friends and a few mediocre enemies. I loved it so much I worked there. It was a mixed bag: pockets of excellent people stunted by woeful morale and exhausting leadership flux. The IPO was, I think, the reward these workers deserved for making Twitter a success despite everything.

And now, it belongs to one of the worst owners I could think of.

Everything we say on Twitter is now raw material for the world’s richest man to squeeze profit from. Every tweet is validation for his free-speech absolutism and teenage trolling, a vote for the weird-nerd Muskian cult.

I’m not going to leave entirely: there’s too much emotional geography there. Those walls hold memories. But I certainly won’t be around as much. I don’t think I’m going to Mastodon: from here, it looks like a nightmare. I will have to be on LinkedIn more, because I have a niche consulting business to run amid a grinding recession, and capitalism forces us to constantly pursue our own debasement.

Beyond that, I expect I’ll be screaming into the void on this here website, and hoping others see it somehow. We’ve always needed better indie-web connective infrastructure (readers solved only a marginal use-case); that need just became more urgent.

More than anything, though, I’ll just be elsewhere. I need to excise the social media brainworms, to unlearn the habit of thinking in short-form and seeking validation from numbers in blue dots. I hope to bump into you on those muddier roads.


Working for the ICO

Summer 2022 ground me into a fine paste and spread me thinly over some dry, tasteless psychological crackers, meaning I forgot to mention I started a new job. I’m now Principal Technology Adviser at the ICO, the UK’s privacy regulator, specialising in product design. The plan is to help the UK design community do better on privacy. Some promising stuff in the pipeline: watch this space.

It’s a part-time role, so I can still balance my academic duties and some light freelancing, assuming no conflicts of interest. At times it’s been a whiplash-inducing, Verdana-drenched challenge, but I’m starting to find my rhythm now, and hell: if you want impact on an entire sector, regulators offer huge opportunities. Hopefully I can make the most of them.


Echoing some home truths

Alexa’s got pushier of late. Seems like almost every query invites some hostile FYI upsell: ‘by the way…’, ‘did you know…’. Presumably some Seattle product manager is on the hook for steep usage targets and is wrestling with the insurmountable issue that voice UIs are notoriously undiscoverable.

So… has anyone else taken to swearing at it as a way to influence the algorithm? I’m positive a company like Amazon is listening for abusive replies and coding them as strong negative feedback signals. (If they’re not, they’re really missing a trick.)
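For the curious, here’s a toy sketch of what I mean – every detail, from the lexicon to the weights, is invented for illustration, and I have no knowledge of how Amazon’s real feedback pipeline works (or whether one exists):

```python
# Hypothetical sketch only: coding an abusive reply to an upsell as a strong
# negative feedback signal. The lexicon, weights, and thresholds are invented;
# nothing here reflects any real Amazon system.

ABUSE_LEXICON = {"fuck off", "shut up", "stop it", "go away"}

def feedback_weight(reply: str) -> float:
    """Map a user's spoken reply to an upsell into a feedback weight."""
    text = reply.strip().lower()
    if any(phrase in text for phrase in ABUSE_LEXICON):
        return -1.0   # abusive reply: strong negative signal, suppress hard
    if text in {"", "no", "no thanks"}:
        return -0.2   # polite decline or silence: mild negative signal
    return 0.5        # engagement: positive signal

# e.g. feedback_weight("oh fuck off, Alexa") == -1.0
```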

I’m generally sentimental about the prospect of treating machine partners well, at least in years to come (see Future Ethics, ch8 – I’m also writing an essay that’s warm on robot moral patiency for my AI Ethics module). But I don’t see what other form of recourse or protest we have, short of throwing the things into a river.

My point, I guess: it might be okay to tell Alexa to fuck off, you know.


New workshop: What Could Go Wrong?

Want to design more responsible, ethical products? If so, you need to understand how your decisions might harm others.

I’m writing a new workshop, What Could Go Wrong?, which critics have labelled ‘wow, a really good title for a workshop’. It’ll introduce anticipatory techniques from strategic foresight, practical ethics, and even science fiction to explore the unintended consequences of design. Attendees will learn new ways to anticipate ethical risks so they can stop them before they happen.

It’ll debut at UX London on 28 June. Grab a ticket: https://2022.uxlondon.com/schedule/day-one/


The ethical risks of emotional mimicry

Picture of an anthropomorphic WALL-E robot, built from Lego. It looks sad.

When might a robot or AI deserve ‘moral status’? In other words, how sophisticated would an AI have to become before it could, say, claim rights, or before we had a moral duty to treat it well? Sci-fi writers love this question, of course, and it’s an ongoing research topic in AI ethics.

One view: we should base this decision on behaviour. If an AI acts like other beings – i.e. humans or animals – that already have moral status, maybe the AI deserves moral status too. So, does it (seem to) dislike and avoid pain? Does it (appear to) have preferences and intentions? Does it (pretend to) display emotions? Things like that might count.

I think some engineers and designers bristle at this idea. After all, we know mimicking this sort of thing isn’t theoretically too tough: we can imagine how we’d make a robot that seemed to flinch from pain, that lip-wobbled on demand, etc.

Nevertheless, this theory, known as ethical behaviourism, is still one some philosophers take seriously. In part that’s because, well… what other useful answers are there? We can’t see into other people’s minds, so can’t really know if they feel or suffer. And we can’t rely on physiology and biomechanics: it’s all silicon here, not nerves and brains. So what other options do we have, apart from observed behaviour?

And imagine if we ever got it wrong. If we made an AI that could suffer, without realising it – a false negative – we’d end up doing some awful things. So it seems reasonable to err on the side of caution.

Back to design. Designers love emotions. We try to engender them in humans (delight!), we talk about them ad nauseam (empathy!), and so we’re naturally tempted to employ them in our products and services. But I think emotional mimicry in tech – along with other forms of anthropomorphism – is risky, even dangerous. First, tech that fakes emotion can manipulate humans more effectively, meaning deceptive designs become even more powerful. 

Second, the idea of ethical behaviourism suggests that at some future point we might become so good at mimicry that we face all sorts of unintended moral and even legal consequences. A dystopia in which the Duolingo owl is so unhappy you skipped your vocab test that you could be prosecuted for cruelty. A chatbot so real we’re legitimately forced to worry whether it’s lonely. Is it even ethical to create something that can suffer? Have we, in effect, just spawned a million unwanted puppies?

Design is complicated enough already: I don’t think we want to sign up for that world in a hurry. I’d rather keep emotion out of it.


What it is I actually do, actually

Been doing a lot of introspection / abyss-gazing about what it is I do these days, and how I communicate it to others. Here’s where I’ve ended up. I’d love to know if these scenarios resonate with others.

If you’re a product, design, or engineering leader, you might feel the rise of tech ethics has made life even more complicated.

Your employees and job candidates are asking tough questions about your company’s impact on the world. You’re learning it’s not enough to just mean well: you need expert guidance on what ethics and responsibility should mean for your team.

You’ve seen toxic tech firms make ethical mistakes that harm the entire sector. You won’t let that happen to your team. So you need to understand the unintended harms your work could do, and reduce risks before they happen.

Perhaps your C-suite has made bold pledges of a sustainable, ethical, responsible future. Sounds great, but now it’s on you to make it happen. Where do you start? Your company’s CSR or ESG initiatives seem somewhat distant; it’s not clear how technology teams on the ground can align with these efforts.

So here’s my new pitch, my new value proposition if you like. I help tech teams build more responsible, ethical products. And I do that through a mix of design, training, speaking, and consulting. I’ve finally rebuilt my website to round all this up neatly and tightly: cennydd.com.

I have some availability from May, so if you’re a product, design, or engineering leader and this strikes a chord, let’s chat. (If you’re not one of those people, please feel free to share with someone who is. Thanks!)


No to masked FaceID

I told iOS 15.4 I don’t want it to recognise my face while I’m wearing a mask.

I trust Apple and their privacy-enhancing implementation of masked FaceID – using Secure Enclave, in this case – and I don’t love the often-tinfoilish surveillance capitalism rhetoric. However, I do think there are valid ethical reasons to reject facial recognition that bypasses masking.

The risk isn’t Apple leaking data: it’s us collectively accepting that machines should be able to recognise us from just half a face / our eyes / our gait / etc etc. That’s a premise worth challenging, at least. I won’t passively contribute to a norm that could draw us closer to real harm.


On strike

Today should be my first day as an associate lecturer at Manchester Metropolitan University, delivering my first session on design ethics to an apprentice group I’ve looked forward to meeting. Instead, I’m on strike.

The UCU union has called nationwide strikes over pay, workload, inequality, and casualisation. I’m not yet a member of the union, and as a practitioner working to an unpredictable schedule, casual contracts suit me better than regular commitments. Perhaps this doesn’t look like my fight.

But the minute I’m paid to prep, teach, and mark a module, I become an educator and even, to my surprise, an academic. My loyalty has to be to my new colleagues. The cost of living is soaring, yet our peers in academia find themselves undervalued: opportunities squeezed, pensions slashed, their expertise and futures devalued by hostile policies.

I’ve been absurdly fortunate to fall into a career that’s overpaid me and given me valuable knowledge. It means I have power. I want to engage with academia but I can do it on my terms: I don’t need the work, and to be candid the pay won’t make a difference to my life. I can choose my moments.

So I have to use that power wisely. I could easily, carelessly make things worse, swooping in and working when others won’t, further restricting opportunities for lecturers with better qualifications and a lifetime of pedagogical skills I don’t have. I can’t and won’t do that.

Of course, it’s a dreadful situation for students too. I feel for them. I hope they realise that academics are striking precisely because their working conditions don’t allow them to educate students properly. I should also point out the administrators on the MMU programme have been gracious and totally understanding. My quarrel isn’t with them.

Stepping into a unionised, precarious industry (academia) while also working in a non-unionised, in-demand industry (tech) has been a whiplash-inducing challenge. I’ve had to think deeply and quickly about the power I have and how I can use that to support my values. Showing solidarity and refusing to cross the picket line is my answer. Academics deserve better, as do students. Educating the next generation is perhaps the best way I can help my field, but I can’t lecture anyone about ethics without first standing up for the values I hold myself.


If you think all design is manipulation, please stop designing

I posted a question today which took off: when does design become manipulation? I have thoughts of my own, and I’m giving some short talks on it soon, but I wanted to survey the wider community’s opinion.

The most common response by far: all design is manipulation. I found this response surprising, let’s say, so I pressed for a few explanations. Mostly, people told me it’s a natural feature of design, but that’s ok because manipulation is an ethically neutral concept.

To which I say: bullshit.

Manipulation is bad. And unless you’re ethically trained and can argue convincingly about minor philosophical exceptions, I say you know full well it’s bad. If you manipulate someone, you use them as a means to your own ends; you undermine their consent and ability to exercise free choice; you withhold your true intent. People would describe you as self-centred, controlling, and deceptive. If your spouse asked how work went today, would you feel proud to reply ‘Well, I manipulated a bunch of people’?

The negative ethical connotation is obvious and well-accepted in general parlance. So I don’t buy this neutrality excuse: it shows a blasé acceptance of harm that’s unbecoming of a professional, and I’m alarmed so many designers seem to believe it.

Design influences. It persuades. But if it manipulates, something’s wrong. The difference isn’t just semantic; it’s moral. A manipulative designer abuses their power and strips people of their agency, reducing them to mere pawns. I see almost no circumstances in which that’s ethically acceptable.

So if you think all design is manipulation, please stop designing.

Picture credit: ZioDave


Web3 and Lexit

(This deserves to be a longer post, but I don’t have the political knowledge to do it justice. So I’ll present it here as a throwaway thought and leave it to the theorists to tell me if I’m onto something.)

I can’t help seeing parallels between leftists embracing web3 and leftists who embraced Brexit (aka the ‘Lexit’ crowd). Sure, I can see the purist theoretical appeal, how there might be a better world ahead if certain steps unfold a certain way. But you’re also putting yourself on the same side as some dreadful people who hold antithetical visions for the future. If you can’t subsequently dispossess them of power – or at least compete for your vision to prevail instead – you’ve contributed to a dystopia.

In other words, ‘I’m into web3 because together we can topple the hegemony of Big Tech’ and ‘I’m into web3 because I can make shit-tons of untraceable, untaxable profit’ are going to come into direct conflict, and my money’s on the latter winning out, because money usually does.

I see this as a big problem for Universal Basic Income too. I’m deeply sympathetic to UBI, but I recognise that many libertarians love it as well – and only because they see it as a way to scrap other welfare, means testing, etc.

It’s a high, high risk strategy to pursue the same means as people who want opposing ends.


What writing is for

Here’s an idea I recognise and agree with: writing is networking for introverts. I’ve earned lasting connections and friendships – and plenty of work – thanks to something I wrote, Future Ethics in particular. And I’m always happier in conversations where each side has some idea of what interests the other. See also Leisa Reichelt’s insight that Twitter et al might lend us a sense of ‘ambient intimacy’ – although wow, how long ago that hope seems now.

Another benefit of writing: it forces you to figure out what you actually think about something. Those mental plasma clouds have to coalesce into some sort of starlike objects; if you’re lucky, they sometimes form constellations others can see from afar.


Technological trespassers

Header image of a No Trespassing sign outside an old gasworks

In 2018, philosopher Nathan Ballantyne coined the term epistemic trespassers to describe people who ‘have competence or expertise to make good judgments in one field, but move to another field where they lack competence—and pass judgment nevertheless.’

It’s a great label (for non-philosophy readers, epistemology is the study of knowledge), and an archetype we all recognise. Ballantyne calls out Richard Dawkins and Neil deGrasse Tyson, scientists who now proffer questionable opinions on theology and philosophy; recently we could point to discredited plagiarist Johann Hari’s writing on mental health, which has caused near-audible tooth-grinding from domain experts.

This is certainly a curse that afflicts the public intellectual. There are always new books to sell, and while the TV networks never linger on a single topic, they sure love a trusted talking head. Experts can grumble on the sidelines all they like; it’s audience that counts.

(As an aside, I also wonder about education’s role in this phenomenon. As a former bright kid who perhaps mildly underachieved since, I’ve been thinking about how certain education systems – particularly private schools – flatter intelligent, privileged children into believing they will naturally excel in whatever they do. Advice intended to boost self-assurance and ambition can easily instil arrogance instead, creating men – they’re almost always men, aren’t they? – who are, in Ballantyne’s words, ‘out of their league but highly confident nonetheless’. I can identify, shall we say.)

Within and without borders

Epistemic trespass is rampant in tech. The MBA-toting VC’s brainwormish threads on The Future of Art; the prominent Flash programmer who decides he’s a UX designer now. Social media has created thousands of niche tech microcelebrities, many of whom carry their audiences and clout to new topics without hesitation.

Within tech itself, this maybe isn’t a major crime. Dabbling got many of us here in the first place, and a field in flux will always invent new topics and trends that need diverse perspectives. But by definition, trespass happens on someone else’s property; it’s common to see a sideways disciplinary leap that puts a well-known figure ahead of existing practitioners in the attention queue.

This is certainly inefficient: rather than spending years figuring out the field, you could learn it in months by reading the right material or being mentored by an expert. But many techies hold a strange contradiction: they claim to hate inefficiency while insisting on solving any interesting problem from first principles. I think it’s an ingrained habit now, but if it’s restricted to purely technical domains I’m not overly worried.

Once they leave the safe haven of the technical, though, technologists need to be far more cautious. As our industry finally wises up to its impacts, we now need to learn that many neighbouring fields – politics, sociology, ethics – are also minefields. Bad opinions here aren’t just wasteful, but harmful. An uninformed but widely shared reckon on NFTs is annoying; an uninformed, widely shared reckon on vaccines or electoral rights is outright dangerous.

Epistemic humility

Ballantyne offers the conscientious trespasser two pieces of advice: 1. dial down the confidence; 2. gain the new expertise you need. In short, practise epistemic humility.

There’s a trap in point 2. It’s easy to confuse knowledge and skills, or to assume one will naturally engender the other in time. Software engineers, for example, develop critical thinking skills that are certainly useful elsewhere, but applying critical thinking alone in new areas, without foundational domain knowledge, easily leads to flawed conclusions. ‘Fake it until you make it’ is almost always ethically suspect, but it’s doubly irresponsible outside your comfort zone and in dangerous lands.

No one wants gatekeeping, or to be pestered to stay in their lane, and there are always boundary questions that span multiple disciplines. But let’s approach these cases with humility, and stop seeing ourselves as the first brave explorers on any undiscovered shore.

We should recognise that while we may be able to offer something useful, we’re also flawed actors, hampered by our own lack of knowledge. Let’s build opinions like sandcastles, with curiosity but no great attachment, realising the central argument we missed may just act as the looming wave. This means putting the insight of others ahead of our own, and declining work – or better, referring it to others who can do it to a higher standard – while we seek out the partnerships or training we need to build our own knowledge and skills.


Scare-quote ethics

Forgive me, I need to sound off about “tech ethics”. Not the topic itself, but those fucking scare quotes: that ostentatious wink to the reader, made by someone who needs you to know they write the phrase with reluctance and heavy irony.

As you’ll see, this trend winds me up. I see it most often from a certain type of academic, particularly those with columns or some other visible presence/following. I love you folks, but can we cut this out? The insinuation – or sometimes the explicit argument – is “tech ethics” is meaningless; I have seen further and identified that the master’s tools will never dismantle the master’s house; these thinktanks are all funded by Facebook anyway; the issue is deeper and more structural.

As insights go, this is patently obvious. Of course the sorry state of modern tech has multi-layered causes, and of course our interventions need to address these various levels. Obviously there’s structural work to be done, not just tactical work.

But this is your classic ‘yes and’ situation, right? Pull every lever. Like, yes, I fully agree that the incentives of growth-era capitalism are the real problem. But we also need the tactical, immediate stuff that works within (while gently subverting?) existing limitations.

The problem with playing these outflanking cards, as we’ve seen from the web → UX → product → service → strategic design treadmill, is that as you demarcate wider and wider territory, your leverage ebbs away. You move from tangible change to trying to realign entire ecosystems. Genuinely, best of luck with it: it needs doing, but it takes decades, it takes power, and it takes politics. Most of those who try will fail.

I’m not equipped for that kind of work, so I do the work I am equipped for. Teaching engineers, students, and designers basic ethical techniques and thinking doesn’t solve the tech ecosystem’s problems. But I’ve seen it help in small, direct, meaningful ways. So I do it.

So please: spare us the scare quotes. Let’s recognise we’re on the same team, each doing the work we’re best positioned to do, intervening at the points in the system that we can actually affect, each doing what we can to help turn this ugly juggernaut around.


Expo 2020

Back from a let’s-say-unusual few weeks in Dubai. I was meant to give a big talk there – dignitaries/excellencies etc, etiquette-expanding stuff – but contracted this dread virus instead, and for a while entertained visions of wheezing my intubated, asthmatic last, many hours from home. Happily the vaccines did their job, and while isolation was grim, my symptoms were entirely weedy. Nonetheless, once I’d recovered, I elected to head home ASAP for hopefully understandable reasons, and so had to withdraw from the event.

I was able to squeeze in a quick visit to Expo 2020, however. It deserves caveats. Yes, it’s teeming with cheesy robots and sweeping popsci generalisations about AI. Yes, its primary function is soft power and reputation laundering, although the queues outside the Belarus pavilion were noticeably short. But I still found it interesting, even touching. There’s something compelling and tbh just cool about bringing the world together to talk about futures – and also to do it in a creative, artistic, architectural, and cultural way that engages the public.

Large water feature at Expo 2020

This is the kind of thing modern-era Britain finds deeply uncomfortable, I think. Excluding the flag-shagger fringe, national earnestness pokes uncomfortably against our forcefields, the barriers of cynicism we construct so we don’t have to look each other in the eye and confess our dreams. The only time fervent belonging ever really worked for us was 2012, and that was only thanks to home advantage.

But it has not escaped my attention that I’m an earnest dude and so, yeah: I enjoyed it. High-frequency grins behind the face mask, lots of mindless photos. Even the mediocre drone shows had some charm, although I drew the line at the ‘smart police station’.

Multicoloured Russian pavilion at Expo 2020

It was also a fascinating toe-dip into other cultures. I’m not likely to see musical performances from subregions of Pakistan, nor a Saudi mass-unison song – swords aloft, dramatic lighting and everything – in my everyday life. I suppose new experience is the point of travel anyway.

Osaka 2025 is a long way off temporally and spatially but, you know, I’m tempted.

Man in Arabic dress pushing a companion’s wheelchair through an artificial cloud of mist at Expo 2020

Poking at Web3

Like everyone, I’ve been trying to understand the ideas of Web3. Not the mechanics so much as the value proposition, I suppose. Enough people I respect see something worthwhile there to pique my curiosity, and the ‘lol right-click’ critique is tiresome. So I’m poking at the edges.

Honestly, it’s heavy going. The community’s energy is weird and cultish, and the ingroup syntax – both technical and social – is arcane to the point of impenetrability: whatever else Web3 needs, it’s crying out for competent designers and writers.

Most of the art is not to my taste, shall we say. Some of it’s outright dreadful. That’s forgivable. The bigger problem, though, is the centrality of the wallet, the token, and so on. I’m avowedly hostile to crypto’s ecological impact and its inegalitarian, ancappish positioning. Crypto folks have promised change is right around the corner for a long time now – call me when it finally happens.

So… grave reservations. But that aside, there is something conceptually appealing there, right? Mutual flourishing, squads, communities weaving support networks that heal system blindspots. I feel those urges too. Perhaps I’m just a dreamy leftist / ageing Millennial-X cusper, but my current solution to this is simple: give people cash. (More on that later, but as an aside, if you’re lucky enough to have money, consider throwing some at people who are trying to carve out fairer, less exploitative tech too. It’s not a lucrative corner of the industry.)

Anyway, I’m still a Web3 sceptic, but the intentions… yeah, they’re pretty cool. If the community can become more accessible and phase out the ugly stuff (most obviously proof-of-work blockchains, but also this notion that transactions are the true cornerstone of mutuality), I’ll be officially curious.


New role at the RCA

Starting as a (part-time) visiting lecturer at the Royal College of Art this week, teaching & mentoring MA Service Design students on ethical and responsible design. The next generation of designers have important work ahead, and I’m pleased to have the chance to influence them.


The law isn’t enough: we need ethics

When I talk about ethical technology, I hear a common objection: isn’t the law enough? Why do we need ethics?

It’s an appealing argument. After all, every country tries to base its laws on some notion of good and bad, and uses legality as a kind of moral baseline. While there are always slippery interpretations and degrees of severity, law tries to distinguish acceptable behaviour from behaviour that demands punishment. At some point we decide some acts are too harmful to allow, so we make them illegal and set appropriate punishments.

Law has another apparent advantage over ethics: it’s codified. Businesses in particular like the certainty of published definitions. The language may be arcane, but legal specialists can translate and advise what’s allowed and what isn’t. By comparison, ethics seems vague and subjective (it’s not, but that’s another article). Surely clear goalposts are better? If we just do what’s legal, doesn’t that make ethics irrelevant, an unnecessary complication?

It’s an appealing argument that doesn’t work out. The law isn’t a good enough moral baseline: we need ethics too.

Problem 1: Some legal acts are immoral

Liberal nations tread lightly on personal and interpersonal choices that have only minor impacts on wider society. Adultery is usually legal, as are offensive slurs, so long as they’re not directed at an individual or likely to cause wider harm. The right to protest is protected, even if you’re marching in support of awful, immoral causes. Some choices might lead to civil liabilities, but generally these aren’t criminal acts. Some nations are less forgiving, of course – we’ll discuss that in Problem 3.

Even serious moral wrongs can be legal. In 2015, pharma executive Martin Shkreli hiked the price of Daraprim, a drug used to treat toxoplasmosis in HIV patients, from $13.50 to $750 a pill. A dreadful piece of price gouging, but legal; if we don’t like it, capitalism’s advice is to switch to an alternative provider. (Shkreli was later convicted of an unrelated crime.)

Or imagine you witness a young child drowning in a paddling pool. You could easily save her but you choose not to, idly watching as the child dies. This is morally repugnant behaviour, but in the UK, unless you have a duty of care – as the child’s parent, teacher, or minder, say – you’re not legally obligated to rescue the child.

Essentially, if we use the law as our moral baseline, we accept any behaviour except the criminal. It’s entirely possible to behave legally but still be cruel, unreliable, and unjust. This is a ghastly way to live, and we should resist it strongly; if everyone lived by this maxim alone, our society would be a far less trustworthy and more brutal place.

Fortunately, there are other incentives to go beyond the legal minimum. It’s no fun hanging out with someone who doesn’t get their round in, let alone someone who defrauds their employer. Unethical people and companies find themselves distrusted and even ostracised no matter whether their actions are legal or not: violate ethical expectations and you’ll still face consequences, even if you don’t end up in court.

Problem 2: Some moral acts are illegal

On the flip side, some behaviour is ethically justified even though it’s against the law.

When Extinction Rebellion protestors stood trial for vandalising Shell’s London headquarters, the judge told the jury that the law was unambiguous: they must convict the defendants of criminal damage. Nevertheless, the jurors chose to ignore the law and acquitted the protestors.

Disobeying unjust laws is a cornerstone of civil disobedience, helping to draw attention to injustice and pushing for legal and social reforms. In the UK, smoking marijuana is still illegal, despite clear evidence that it doesn’t cause significant social ills. Although I don’t enjoy it myself, I certainly can’t criticise a weed smoker on moral grounds, and the nation’s widespread disregard of this law makes future legalisation look likely.

There’s also a moral case for breaking some laws out of necessity. A man who steals bread to feed his starving family is a criminal, but we surely can’t condemn his actions. Hiding an innocent friend from your government’s secret police may be a moral good, but the illegality puts you at risk too: if you’re unlucky, you might find yourself strapped to the waterboard instead.

Problem 3: Laws change across times and cultures

The list of moral-but-illegal acts grows if we step back in time. Legality isn’t a fixed concern: not long ago, it was legal to own slaves, to deny women the vote, and to profit from child labour.

Martin Luther King Jr’s claim that ‘the arc of the moral universe is long, but it bends toward justice’ gives us hope that we can right historical wrongs and craft laws that are closer to modern morality. But there are always setbacks. Look, for example, at central Europe today, where some right-wing populists are rolling back LGBTQ and abortion rights that most Western nations see as moral obligations.

If we equate the law and morality, aren’t we saying a change in the law must also represent a legitimate shift in moral attitudes? If a government reduces a speed limit, were we being immoral all those years we drove at 100kph rather than 80kph? Is chewing gum ethically wrong in Singapore but acceptable over the border in Malaysia? It can’t be right that a redrawing of legal boundaries is also a redrawing of ethical boundaries: there must be a distinction between the two.

There is, however, a trap here. Moral stances can vary across different times and cultures, but if we take that view to extremes, we succumb to moral relativism or subjectivism. These tell us that ethics is down to local or personal opinion, which leaves the conversation at a dead end. More on this in a future article, but for now I’ll point out that almost every culture agrees on certain rights and wrongs, and to make any progress we must accept some ethical stances are more compelling and defensible than others. Where moral attitudes vary, they still don’t move in lock-step with legal differences.

Problem 4: Invention outpaces the law

The final problem is particularly relevant for those of us who work in technology. Disruptive tech tends to emerge into a legal void. We can’t expect regulators to have anticipated every new innovation, each new device and use case, alongside all their unexpected social impacts. We can hope existing laws provide useful guidance anyway, but the tech sector is learning that new tech poses deep moral questions the statute books simply don’t cover. The advent of smart glasses alone will mean regulators will have to rethink plenty of privacy and IP law in the coming years.

We can and must push for better regulation of technology. That means helping lawmakers understand tech better, and bringing the public into the conversation too, so we’re not stuck in a technocratic vacuum. But that will take time, and can only ever reduce the legal ambiguity, not eliminate it. The gap between innovation and regulation is here to stay, meaning we’ll always need ethical stances of our own.


Double positive: thoughts on an overflow aesthetic

[Tenuous thoughts about the last two Low albums and (post)digital aesthetics…]

I think Low’s Double Negative (2018) is a legit masterpiece, a shocking right-angle for a band in their fourth active decade. Probably my favourite album of the century so far.

To describe the album’s sound, I’d have to reach for a word like ‘disintegration’. The songs are corroded, like they’re washed in acid, or a block of sandstone crumbling apart to reveal the form underneath. The obvious forefather is Basinski’s Disintegration Loops, which uses an analogue technology (tape and playhead) to create slow sonic degradation.

Double Negative’s vocals aren’t spared this erosion: they’re tarnished and warped to the point of frequent illegibility.

Reviewers pointed out Double Negative is the perfect sonic fit for its age. Organic, foreboding, polluted: as a metaphor for the dread and looming collapse we felt in the deepest Trump years, it’s on fucking point.

Hey What, released this month, is no masterpiece. But it’s still a great album, and like Double Negative I feel it’s also suited to its time. While the music is still heavily distorted, Hey What’s distortion is tellingly different. Rather than the sound being eroded, pushed below its original envelope, Hey What’s distortions come from excess, from overflow.

The idea of too much sound/too much information is how fuzz and overdrive pedals work, but this overflow is distinctly digital, not analogue. It’s not just amps turned up to 11 – it’s acute digital clipping, a virtual mixing desk studded with red warning lights, and millions of spare electrons sloshing around. More double positive than double negative. And unlike its predecessor, Hey What spares its vocals from this treatment, letting them soar as Low vocals historically do.
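To put the distinction in concrete terms, here’s a minimal sketch of the two flavours of distortion – smooth analogue-style saturation versus digital hard clipping – using NumPy. (The tone and levels are illustrative; nothing here comes from an actual Low session.)

```python
import numpy as np

t = np.linspace(0, 1, 44100)                # one second at CD sample rate
signal = 4.0 * np.sin(2 * np.pi * 220 * t)  # a 220 Hz tone pushed well past full scale

# Analogue-style overdrive: the waveform saturates gradually, rounding the peaks
analogue_style = np.tanh(signal)

# Digital overflow: samples slam into the maximum value and flat-top abruptly
digital_clip = np.clip(signal, -1.0, 1.0)
```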

Brian Eno famously said ‘Whatever you now find weird, ugly, uncomfortable and nasty about a new medium will surely become its signature.’ So yes, artists were always going to mess around with digital distortion and overflow once digital recording & DAWs became mainstream. I hear some of this experimentation in hyperpop, say, while autotune is arguably in the same conceptual ballpark. Although I’m no expert in contemporary visual culture, it seems clear to me the overflow vibe also crops up in digital art, supported by the NFT crowd in particular.

‘Something is happening here’ isn’t itself an exciting thesis, but I’ve found it interesting to poke at the connotations and associations of overflow. While Double Negative is all dread and collapse, Hey What is tonally bright. The world may not have changed all that much in three years, but the sound is nevertheless that of a band that’s come to terms with dread and chooses to meet it head-on: an equal and opposite reaction.

Hey What is still messy, challenging, and ambivalent, but to me, its overflow aesthetic evokes post-scarcity, a future of digitised abundance, in which every variation is algorithmically exploited, but with the human voice always audible above the grey goo. It suggests, dare I say, that we could live in (a bit of) hope.

So I guess I’m wondering… are Low solarpunk now?


Available for new projects

Wrapped up a major client, took a little time off, and at last I have some capacity for future projects. Via Twitter thread, here’s a little reminder of what I do; please share it with anyone who needs help making more responsible and ethical technology.

While I’m self-promoting, I’m told the World Usability Day (11 Nov) theme this year is trust, ethics, and integrity. I’m into that stuff. One advantage of the remote era is I can do multiple talks a day, so drop me a line if you need a keynote.

Oh, and I’m still interested in the odd bit of hands-on design, too. Turns out I’m still decent at it. Hit me up: cennydd@cennydd.com.


Book review: Design for Safety


Just sometimes, the responsible tech movement can be frustratingly myopic. Superintelligence and the addiction economy command the op-eds and documentaries while privacy and disinformation, important as they are, often seem captured by the field’s demagogic fringe. But there are other real and immediate threats we’ve overlooked. In Design for Safety, Eva PenzeyMoog pushes for user safety to be more prominent in the ethical tech conversation, pinpointing how technologies are exploited by abusers and how industry carelessness puts vulnerable users at risk.

The present tense is important here. The book’s sharpest observation, and the one that should sting readers the most, is that the damage is already happening. Anticipating potential harms is a large part of ethical tech practice: what could go wrong despite our best intentions? For PenzeyMoog, the issue isn’t conditional; she rightly points out abusers already track and harm victims using technology.

‘I’m very intentional about discussing that people will abuse our products rather than framing it in terms of what might happen. If abuse is possible, it’s only a matter of time until it happens. There is no might.’

With each new technology, a new vector for domestic abuse and violence. We’re already familiar with the smart hauntings of IoT: abusers meddling with Nest thermostats or flicking on Hue lights, scaring and gaslighting victims. But the threat grows for newer forms of connected technology. Smart cars, cameras, and locks are doubly dangerous in the hands of abusers, who can infringe upon victims’ safety and privacy in their homes or even deny them a means to escape abuse.

While ethical tech books often lean closer to philosophy than practice, A Book Apart publishes works with a practical leaning. PenzeyMoog helpfully illustrates specific design tactics to reduce the risk of abuse, from increased friction for high-risk cases (an important tactic across much of responsible design), through audit logs that offer proof of abuse, to better protocols for joint account ownership: who gets custody of the algorithm after a separation?
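As a rough illustration of the audit-logging idea (this sketch is mine, not the book’s; the event shape and device names are hypothetical), the essential property is an append-only record of who changed a shared device’s state, and when:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class DeviceEvent:
    device: str      # e.g. 'living-room thermostat'
    action: str      # e.g. 'set_temperature=30'
    actor: str       # the authenticated account that issued the command
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class AuditLog:
    """Append-only: events can be recorded and read, never edited or deleted."""

    def __init__(self) -> None:
        self._events: list[DeviceEvent] = []

    def record(self, event: DeviceEvent) -> None:
        self._events.append(event)

    def history(self, device: str) -> list[DeviceEvent]:
        """Everything that happened to one device - evidence of a pattern."""
        return [e for e in self._events if e.device == device]

log = AuditLog()
log.record(DeviceEvent("living-room thermostat", "set_temperature=30", "ex@example.com"))
```

The design choice that matters is immutability: a log an abuser can edit is no evidence at all.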

Tactics like this need air cover. Given the industry’s blindspot for abuse, company leaders won’t sanction this extra work unless they understand its necessity. PenzeyMoog suggests public data is the most persuasive tool we have. It’s hard to argue against the alarming CDC stat that more than 1 in 3 women and more than 1 in 4 men in the US have experienced rape, physical violence, and/or stalking by an intimate partner.

Central to PenzeyMoog’s process is an admission that empathy has limits. While we should certainly try to anticipate how our decisions may cause harm, our efforts will always be limited by our perspectives:

‘We can’t pretend that our empathy is as good as having lived those experiences ourselves. Empathy is not a stand-in for representation.’

The book therefore tackles this gap head-on, describing how to conduct primary research with both advocates and survivors, adding valuable advice on handling this task with sensitivity while managing your own emotional reaction to challenging testimony.

Tech writers and publishers often seem reluctant to call out bad practice in print, but Design for Safety is unafraid to talk about what really matters. One highlight is a heartening, entirely justified excoriation of Ring. Amazon’s smart doorbell is a dream for curtain-twitchers and authoritarians, eroding personal consent and private space. PenzeyMoog argues one of Ring’s biggest defects is that it pushes the legal and ethical burden onto individual users:

‘Most buyers will reasonably assume that if this product is on the market, using it as the advertising suggests is within their legal rights.’

That legal status is itself far from clear: twelve US states require that all parties in a conversation consent to audio recording. But the moral issue is more important. By claiming the law is a suitable moral baseline, Ring pulls a common sleight of hand, but for obvious reasons (countries and states have different laws; morality and law change with time; many unethical acts are legal) this is sheer sophistry. Ring has deep ethical deficiencies: we mustn’t allow this questionable appeal to legality to deflect from the product’s issues.

Design for Safety also takes a welcome and brave stance on the conundrum of individual vs. systemic change. It’s popular today to wave away individual action, arguing it can’t make a dent in entrenched systems; climate campaigners are familiar with the whataboutery that decries energy giants while ignoring the consumer demand that precipitates these companies’ (admittedly awful) emissions. Design for Safety makes no such faulty dismissals. PenzeyMoog skilfully ‘yes and’s the argument, agreeing that attack on any one front will always be limited, but contending that we should push tactical product changes while also trying to influence internal and industry-level attitudes and incentives.

‘We don’t need to choose between individual-level and system-level changes; we can do both at once. In fact, we need to do both at once.’

This is precisely the passionate but clear-headed thinking we need from ethical technologists, and it makes Design for Safety an important addition to the responsible design canon. If I have a criticism, it’s the author’s decision to overlook harassment and abuse that originates in technology itself (particularly social media). Instead, PenzeyMoog focuses just on real-world abuse that’s amplified by technology. Having seen Twitter’s woeful inaction over Gamergate from the inside, I know that abuse that emanates from anonymous, hostile users of tech can also damage lives and leave disfiguring scars. The author points out other books on the topic exist – true, but few are written as well and as incisively as this.

Design for Safety is a convincing, actionable, and necessary book that should establish user safety as a frontier of modern design. Technologists are running out of excuses to ignore it.

Buy Design for Safety here.

Ethics statement: I purchased the book through my company NowNext, and received no payment or other incentive for the review. I was previously a paid columnist for A List Apart, the partner publisher of A Book Apart. There are no affiliate links in this post.
