Get Happy

Note to self: invest in a box of face masks featuring the sort of happy, smiley face modern employers like to see.

Canon Information Technology actually announced its “smile recognition” cameras last year as part of a suite of workplace management tools, but the technology doesn’t seem to have gotten much attention. Indeed, the fact it passed under the radar is a good illustration of just how common surveillance tools like this are becoming — and not just in China.

Although readers in the West sometimes have a tendency to dismiss the sort of surveillance described by the FT as a foreign phenomenon, countries like the US and UK are just as culpable. […]

Such modern-day Taylorism is not restricted to blue collar jobs, either: many modern software suites like Microsoft 365 come with built-in surveillance tools. And with more people working from home because of the pandemic, more companies are deploying these features for fear of losing control over their workers. (Or, for a slightly more cynical read: they’ve always wanted to use these tools and the pandemic provides a handy pretext.)

One day Excel is going to demand that I flash it a smile that convinces it that I’m genuinely, unquestioningly happy before it agrees to do me the favour of recalculating the figures I’m pointing it to,1 at which point my time on this Earth will be done.

[Via Sentiers]

  1. To think, we used to imagine that the machines would win by pointing a ray gun at us and threatening us with extermination. Those were the days. 


Once upon a time this scam would have resulted in ministerial resignations/sackings:

Eight years ago the government had a plan so good it couldn’t tell you about it. It wanted to scrape everyone in England’s entire GP records and put them on one central database, where they would be anonymised – well, sort of! – then made available for research purposes to third parties, including private corporations. And called it[…] [Description of the fiasco/climbdown follows.]

And hey, the government learned its lesson. Which is to say that eight years on – literally right now – it’s doing the same thing, only in less time, without a public awareness campaign, with a trickier opt-out, and in the middle of a global pandemic. […]

The opt-out process described here is longer and fiddlier than you might hope for, but that’s mostly because the government has designed it to be complicated. That can’t really be helped, given that we’re dealing with a government that is utterly shameless about this stuff.

For the avoidance of doubt: data being shared to help medical research is, in principle, a good thing. Data being quietly handed over to commercial entities who can and almost certainly will hide behind ‘commercial confidentiality’ to obscure the efforts they’ve made to ‘unlock the pseudonymisation codes’ is not.

If you’re in England this potentially affects you. Go here for a step-by-step guide on how to opt out of this data giveaway. Also, go and read Marina Hyde’s article if you want to relish quality snark like…

Post its collapse, the plan was described by one statistics professor as “disastrously incompetent – both ethically and technically”. Which sounds like the sort of review Mary Berry would give on Bake Off to a roulade made entirely of human ears, but which arguably has even wider implications.

… in context.

This is fine

It’s almost as if the manufacturers of smart speakers want everyone to get used to accidental activations:

Voice assistants in smart speakers analyze every sound in their environment for their wake word, e.g., «Alexa» or «Hey Siri», before uploading the audio stream to the cloud. This supports users’ privacy by only capturing the necessary audio and not recording anything else. The sensitivity of the wake word detection tries to strike a balance between data protection and technical optimization, but can be tricked using similar words or sounds that result in an accidental trigger.
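The tradeoff the quoted passage describes can be illustrated with a deliberately crude sketch. Real on-device wake-word spotters run a small acoustic model over the audio stream; here, plain string similarity stands in for the acoustic match score, which is enough to show why a sensitivity threshold trades missed activations against accidental triggers on similar-sounding words. The threshold value and the word list are invented for illustration.

```python
from difflib import SequenceMatcher

WAKE_WORD = "alexa"

def similarity(heard: str, wake_word: str = WAKE_WORD) -> float:
    """Crude stand-in for an acoustic match score: string similarity in [0, 1]."""
    return SequenceMatcher(None, heard.lower(), wake_word).ratio()

def is_triggered(heard: str, threshold: float = 0.7) -> bool:
    """A lower threshold catches more mispronunciations of the wake word,
    but also fires on merely similar words -- the accidental activations
    the quoted study describes."""
    return similarity(heard) >= threshold

if __name__ == "__main__":
    # "alexander" is not the wake word, yet scores close enough to trigger;
    # raise the threshold and genuine but sloppy pronunciations get missed.
    for word in ["alexa", "alexander", "election", "dinner"]:
        print(f"{word:10s} score={similarity(word):.2f} triggered={is_triggered(word)}")
```

Tightening the threshold to suppress the `alexander` false positive is exactly the balancing act the researchers are pointing at: there is no setting that never mishears and never misses.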

[Via Things That Have Caught My Attention s08e16]

Social credit – a different view

[Executive Summary: Facebook-style Social Media needs to die now, before it gets a chance to grow into this monstrosity.]

Given the distinctly Big Brother-flavoured response in the West to the notion of China introducing a highly automated social credit system, an academic who spent 16 months in China exploring attitudes to the idea gives us the perspective of the folks who would be affected once China finishes rolling it out:

During my time there, I found that positive perceptions of the social credit system among ordinary Chinese people were more prevalent than negative ones. Some welcomed the introduction of the shehui xinyong system while others were indifferent, and a significant number could see its benefits.

The thing is, western eyes looking at social credit and worrying about how badly it could turn out might indeed be looking at the system while lacking a Chinese cultural perspective on the reasons why the Chinese state is standing in for the ancient Chinese concept of ‘tian’,1 but that’s not the really important issue here.

First, what’s being reported here from China is a view formed before the social credit system is anywhere close to being fully rolled out. By the time it’s been fully operational for half a decade, who knows what reputation the system will have with the Chinese populace for the way it delivers judgement on their behaviour?

Second, given the extent to which the Chinese state leans on those who do not conform to its ideals even using low-tech means, isn’t the real issue less that Westerners fail to understand tian and more that the prospect of the Chinese state turning into Facebook-on-steroids-and-run-by-a-government-free-of-pesky-independent-media-providing-information-about-all-those-messy-edge-cases might be just the start of our problems?

Imagine social credit being pointed to by future populist politicians in the West, keen to divert attention from the way they screwed up their response to COVID-19. That’s a very scary prospect. Consider how forty years ago – back when many people didn’t even have a credit card – most people didn’t worry too much about their credit rating, and how far that situation has changed since. Who’s to say that politicians eager for the state to step back and for the populace to behave themselves won’t one day find the concept behind social credit very appealing? Nobody’s saying out loud right now that maybe the Chinese have the right idea, but that doesn’t mean the same concepts couldn’t be repackaged so that, a decade down the line, it seems like common sense to build on what’s already out there.

[Via @cityofsound]

  1. The author describes Tian as an entity that “resembles the sky in that it is distant to the point that it has given up on the task of reconciling the human world with itself, but nevertheless knows about everyone’s deeds and thoughts.” 

Social Surveillance

Australia’s ABC News on China’s plans to leave no dark corner when it comes to monitoring the populace’s lives online:

Dandan doesn’t object to the prospect of life under the state’s all-seeing surveillance network. […] The 36-year-old knows social credit is not a perfect system but believes it’s the best way to manage a complex country with the world’s biggest population. […]
Under an existing financial credit scheme called Sesame Credit, Dandan has a very high score of 770 out of 800 – she is very much the loyal Chinese citizen.

Just for the record, I hated the visual style of this piece – way too much scrolling over images to get to the next piece of the text – but the content is decent. The really worrying prospect of this stuff is how easy a sell a European edition of the Social Credit software is going to be to a certain type of centrist politician.[note]OK, not in the UK marketplace, maybe, not under that label. But a quick rebranding exercise should take care of that and we’ll be off to the races.[/note]

The term ‘social credit’ should be as despised as the term ‘meritocracy’, properly understood, deserves to be, but give it a decade or so and – barring a backlash against the level of surveillance and the consequences for ordinary citizens having brought down the Chinese government in the meantime – you’re going to see a huge bunfight as every careerist with half an eye on Number 10 decides that this is exactly what the UK needs to get the populace to shape up. We can but hope that they get the successors of the Government Digital Service to try to implement a UK version, so that with any luck it’ll show up years late and barely functional and the world will have moved on to the next big idea.



Having caught up with Kashmir Hill‘s Gizmodo piece ‘People You May Know:’ A Controversial Facebook Feature’s 10-Year History, I’m both supremely glad that I’m not on Facebook [note]I was briefly a member a few years ago for as long as it took me to conclude that I didn’t need any of the services they were offering. Not that I’m making any claims to be particularly wise or virtuous or forward-looking: more that I’m markedly less sociable than they’d like their users to be.[/note] and creeped out by how little difference that makes to Facebook’s determination to shadow profile me whether I like it or not.

In other words, People You May Know is an invaluable product because it helps connect Facebook users, whether they want to be connected or not. It seems clear that for some users, People You May Know is a problem. It’s not a feature they want and not a feature they want to be part of. When the feature debuted in 2008, Facebook said that if you didn’t like it, you could “x” out the people who appeared there repeatedly and eventually it would disappear. (If you don’t see the feature on your own Facebook page, that may be the reason why.) But that wouldn’t stop you from continuing to be recommended to other users.

Facebook needs to give people a hard out for the feature, because scouring phone address books and email inboxes to connect you with other Facebook users, while welcome to some people, is offensive and harmful to others. Through its aggressive data-mining this huge corporation is gaining unwanted insight into our medical privacy, past heartaches, family dramas, sensitive work associations, and random one-time encounters.

[Via Pixel Envy]