The Fairness Doctrine was a 1949 FCC policy that required holders of broadcast licenses (TV and radio) to air contrasting views on controversial issues of public importance. It was upheld by the Supreme Court in 1969 but was abolished in 1987 by the FCC under the influence of Ronald Reagan. An attempt by the Democrat-controlled Congress to reinstate it was vetoed by Reagan. The perception was that the doctrine was, in practice, anti-conservative.
Some have argued that this event was the beginning of the partisan broadcast media. Notwithstanding the difficulty of administering a requirement of fairness, I think that it is hard to argue that abandoning the principle even as an aspiration would have no effect on media culture. And Rush Limbaugh’s malign influence seemed to intensify and spread at exactly this time. He would not have been able to broadcast his deliberately provocative nonsense before 1987. Indeed, he got fired from his first radio job at KQV in 1974 for over-stepping the mark.
Was Fox News also a consequence? Some might argue not, because as a cable network it is not considered a broadcaster and would not have been subject to the doctrine. This, however, misses the point: had it not been scrapped, the doctrine could, and surely would, have been extended to other media channels, if only there were support for the principle.
Which brings me to Facebook.
The big problem with Facebook is not their market domination. It is not that they are displacing the mainstream media. It is not that they video cast murders once or twice per year. It is not that they censor free speech according to their own quixotic principles. It is not that they fail to remove hateful garbage but allow other nonsense.
The problem with Facebook is their central business model. This model is:
I will keep you on-line for as long as I can using any psychological tricks that our social scientist consultants and data scientists have shown will work. If I can do that then I can maximise the value of selling your sorry arse to our advertisers.
The psychological tricks, it turns out, mainly come down to telling people what they want to hear, making sure that they never hear a dissenting opinion and rewarding them with likes. The FB bubble feels pretty good for those who crave validation. It is the ultimate “safe space” though leftists usually mean something else when they use this term!
Personally, I make a special effort to subscribe to groups and individuals with whom I will likely disagree. But unless I can get them to engage with me, the FB algorithm will stop showing my posts to them or their posts to me. Our entire society is part of a world-wide experiment that never required ethics approval and would certainly have been denied approval by any committee that I have served on. The subjects are manipulated without being told how, and their private data is sold to private companies. And there are almost 3 billion monthly active users. That’s a lot of lab-rats.
So, what is to be done, and how does this relate to the fairness doctrine?
I do not care that much about my private data being sold (so long as I can opt out) or being subjected to ads. Some of the ads are really well targeted. But all the offers from Russian brides are becoming irritating. (Does the algorithm target every 62-year-old white male? My status does say I am married!) But I do care that my posts only reach those who have previously agreed with similar posts. And I care that I rarely see news on my feed that challenges my opinions, even though I have tried to arrange this.
The fairness doctrine required media to “air opposing views on controversial issues”. While Facebook is not a media outlet, there is no reason to limit the principle to the rapidly declining media. It is the principle of diversity of opinion that is important.
The culprit is the algorithm and it cannot be allowed to continue. We cannot suffer this abomination (which one might even consider an emergent AI intelligence) to infect, brainwash and ultimately control every poor sod who is on-line, not with one ideology of course, but with a whole range of equally deranged ideologies. The MAGAs are brainwashed, as are the Flat-earthers, the SJWs, the BLM true believers and the White Supremacists, all with a barrage of alternative facts from alternative universes, while that young 36-year-old prick Zuckerberg in his skivvy pretends to apologise.
There might be only a few years left when government has the agency to stop this algorithm. Here is how it is done.
- You pass a bill requiring FB and all other social media platforms to reveal all the details of their algorithm to government regulators.
- You require the algorithm to be modified according to the Fairness Doctrine as explained below.
- You compulsorily close, break up, acquire or massively fine companies that show resistance.
The application of the Fairness Doctrine to the algorithm would be as follows. Facebook are easily able to classify people based on their networks. There is the left-right dimension but there are lots of other finer ones: the dimension for authoritarianism; individual versus community; materialism versus spiritualism; national versus global; race blind versus anti-racist.
You do not even have to give the dimensions names. Facebook is already using this kind of machine learning technology. That is how they target the ads and the posts that appear on your news feed. You are a data point in multi-dimensional space. So is every user and FB can see who you are close to and how.
The Fairness Doctrine would require Facebook to deliver contrasting views to your news feed. Maybe you use 4 dimensions, implying 16 polar opposites. You get a feed from the Guardian on refugees because you subscribe. I say that you cannot read another post in the same part of the space until you have read X other posts from contrasting points of view. The next item on your feed might be from the Telegraph on wind turbines killing birds. Then one on Islamic fundamentalism. Then one on Chinese soft power. Then a post from your weird Flat Earther FB friend. You get the idea.
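The gating rule can be sketched in a few lines of code. This is a hypothetical illustration only, not Facebook’s actual system: it assumes each post carries a position vector in a 4-dimensional opinion space, with the sign pattern of the vector identifying one of the 16 “polar opposites”. The class name, vector encoding and parameter X are all my own inventions for the sake of the sketch.

```python
from collections import deque

def pole(vec):
    """Map a position in 4-D opinion space to one of 16 poles (its sign pattern)."""
    return tuple(v >= 0 for v in vec)

class DiversityGate:
    """Block a post if its pole appeared among the last X posts served."""
    def __init__(self, x_required=3):
        # Poles of the most recent allowed posts; a pole must roll off this
        # window (i.e. X other posts must be served) before it can repeat.
        self.recent = deque(maxlen=x_required)

    def allow(self, post_vec):
        p = pole(post_vec)
        if p in self.recent:
            return False        # same part of the space too soon: hold it back
        self.recent.append(p)   # served: remember its pole
        return True

gate = DiversityGate(x_required=3)
print(gate.allow([1, 1, -1, 1]))    # first post from this pole: served
print(gate.allow([2, 3, -1, 5]))    # same sign pattern, too soon: blocked
print(gate.allow([-1, -1, 1, -1]))  # contrasting pole: served
```

The fixed-length deque does the bookkeeping: once X posts from other poles have been served, the blocked pole is evicted from the window and becomes eligible again, which matches the “X other posts from contrasting points of view” rule above.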
Will users read these posts? They can be required to open them to see anything else new on their newsfeed. No more cat videos or photos of their friends until they look at the wind turbine story. The algorithm might require the page to remain open for at least 60 seconds. FB can monitor this with cookies. They cannot guarantee that you read it of course. Don’t bother sniping about practicalities. This is all possible when one considers what FB does now in the pursuit of brainwash induced profit.
Many people might leave FB for the dark web. But most current users (like you and me I suspect) who mainly post holiday snaps or communicate with friends and relatives about recent events or old school friends about reunions would not. Nor would those who want to use FB to be truly informed about current affairs or communicate their ideas about current affairs. I would really welcome the automatic delivery of diverse news, professional comments and FB friend comments onto my feed because trying to orchestrate this diversity myself is swimming against the tide of FB’s current algorithm.
So who would impose this? Which government would dare to declare war against FB? (No, not Scomo. He has already put us in the firing line once already).
I cannot see it happening in the US. There is too much suspicion of government. Any imposition of diversity would lead to further insurrections, though I hope it is clear from my explanation that I am talking about real diversity, not the bullshit diversity that we hear about on the Drum.
As much as the heavy regulatory hand of the EU worries me, I wonder if they might not be uniquely placed to put a stake in the sand and drive a stake through the heart of the appalling Frankenstein monster that Dr. Sweetmountain has created. (Yes, I overdid the metaphors here, but hell, I am on a roll). The EU is big enough for FB to comply and the politics would play out much better there than in the US.
I am not a twitter user so I do not know how this plays out on that and other platforms. But the principle is the key. Private companies have no right to create a widely used space where people are deliberately corralled into tiny islands of the political space, where they do not have bad ideas challenged, all for the trivial purpose of keeping them on that tiny island for venal profit.
Is it censorship? No. While it would be imposed by government, I hope it is clear that it is not government censorship. For a start, nothing is banned (over and above current limitations). Does it require judgement of what is fair and unbiased? No. It utilises an empirically revealed definition of diversity of news and comment. Users are forced to explore the existing space of ideas. Sounds good to me. Will it force users towards a view? No. It would automatically draw users towards the empirical middle of the FB universe while making them aware of other parts of the universe.
Interested in people’s thoughts. I have not heard this suggested elsewhere and there may be fatal weaknesses in the idea. But I think there is a growing consensus that social media cannot be allowed to continue in its present toxic form.
Thanks Chris,
A great post – not because I agree with it all, but because it came from a somewhat different perspective to my own and you made me see the simplicity of your position.
I’m certainly very sympathetic to your intentions. And the means you propose seem broadly OK – if unlikely to gain any immediate support from politicians or governments. Still one has to start somewhere.
Some thoughts:
1) When I think of the public interest as some counterweight to the private interest, I no longer think of government as it’s constituted. It’s already too compromised by private interest. Nor do I even think of bureaucracy.
I think of sortition – the process of allotment or random selection by which juries are constituted. (Note: one can do this among some qualified sub-set of people if one is after expertise – I’ll relegate an explanation to a footnote below).*
2) Pursuant to that thought, I know someone quite senior in Facebook who looks after political propaganda and proposed the idea of citizens’ juries to steer moderation policy on Facebook and its various subsidiary social media platforms. They do have some mechanisms in there a bit like that – committees of outsiders advising them – but I doubt they have much power. It seems to me to be a no-brainer for Facebook to hand over a fair bit of power to them, because it legitimates what they do and takes themselves out of the picture to a substantial degree. It would most likely reduce profit, but then it makes the profit they do make safer.
3) I like your suggestion as to policy, so I’d give the citizens’ jury some role in supervising something like that.
4) My support for citizens’ juries here is not just because I think they offer profound opportunities to introduce a different logic of representation into our democracy. One of their chief benefits – seen back through the ages to ancient Athens – is their relative immunity to corruption by the powerful. Trying to get this kind of thing through party politicians is very difficult, because party politicians operate in the netherworld between what can be sold to the electorate as a Good Thing and what the powerful will tolerate (or even better, fund!)
* I’d like to see most of the public sector boards replaced by allotted boards chosen from some qualified subset of the population. Thus for instance, as I wrote to someone a few days ago regarding my proposal for independent fiscal policy “the one thing I’d change in what I proposed 25 years ago is the nature of governance. I’d use sortition to randomly select people for the board of the new fiscal body from a much larger population of qualified experts. I’d also have a citizens’ council chosen from the population which could overrule the experts if a super-majority disagreed with it. Defenders of the existing system would be freaked out by this, but if you think about it, the art of the jury is to INTERNALISE the incentives. If ordinary people thought the experts were giving their best advice they’d probably agree – but I have no problem if they push back and eventually don’t agree. They’re not stupid.”
On citizens’ juries, I have previously expressed strong support for the idea. But specific to your point 4, France’s recent experiment with a citizens’ jury on climate change took me a bit by surprise by the way it was so easily railroaded by the “establishment”.
They were not given enough time to think; there was no attempt to select for any specific competences, so many of them were simply incapable of understanding what they were being told, let alone critically assessing it; and they were basically spoon-fed their briefings by a completely “establishment” mix of people (I really do not have a better word for it), which was reflected in the staggering lack of imagination or perception in their recommendations.
:(
So whilst I am still a fan, I think much more energy needs to go into careful design for particular cases, and some degree of specific competences, even if this starts to veer back towards elitism.
Indeed
If you’re a fan of sortition, you should be pleased that it’s starting to get subverted by power. That’s a sign that it’s starting to turn up in the landscape and that it has some subversive power of its own. Otherwise, why bother to subvert it? So as well as welcoming the fact that this logic is playing out, we have to work against that subversion.
It seems to me that we need to do so by developing a culture of sortition. Most people writing about this imagine some Archimedean point from which to protect sortition – as you have here. The problem is that the establishment occupies most such points.
That’s one reason why I am suggesting that sortition within classes of people judged to have the skills is an important way to interdict any career interest they might be given by the establishment.
I also think that those citizens who have participated in sortition exercises become useful alumni of the process and that experience shouldn’t be thrown away. I’d like to see some body that could become an advocate for, and to some extent the custodian of, the culture of sortition. It would be a group of people who’d experienced a citizens’ jury. There might be a body of people who were just chosen by sortition from that cadre and so were presumptively representative of the community in just the way the original citizens’ juries were.
However I’d also like to see the idea of merit recognised. Not by elections for office bearers and not by government appointing them, but by a process of endogenous merit selection such as the one used to select spokespeople for the South Australian citizens’ jury on nuclear waste.
Hi Nick,
Sortition would be a better source of people for moderation. I agree. Moderation is still needed to purge the FB-landscape of completely unacceptable views. But as I am sure you realised, my post is not about moderation at all because that requires judgement. My idea is to forcibly change the algorithm so it does not concentrate your feed. This is empirically driven and requires no human judgement.
Thanks Chris,
I appreciate the point as a matter of logic. But I think it leaves two things unaddressed which I was trying to address.
1) Such regulation will be neither introduced nor maintained without political support. So it seems to me that the legitimacy of the way in which the regulation is arrived at and governed is important. As I noted in my comment above, there is no Archimedean point from which it can be imposed. So in thinking about imposing it, we need to think about how to promote its legitimacy.
2) The rule will require ongoing governance. There will be any number of ways the regulated entity might want to avoid it – it might serve up views that formally comply with the requirement for opinion diversity but which are actually written as an ironic nod to the other side – as Antony’s “Friends, Romans, Countrymen” speech does in Shakespeare’s Julius Caesar. (To take the issue du jour, you could publish an article opposing an independent inquiry into an alleged rape by a Cabinet minister, liberally sprinkled with quotes from Peter Dutton saying what a ‘gutsy’ press conference the Cabinet minister gave, which was in fact written to undermine the case.)
Being told “you must make Facebook less engaging for Australians”, I can imagine Facebook deciding that the simplest compliant change would be blocking their site in Australia. The alternative of spending time and money on a red queen race specifically in Australia would probably not appeal, especially if the expressed goal was (as here) to make Facebook less profitable.
I recommended in my post that the EU would be the most likely government to do this. They have fined big tech and imposed strict data privacy regulations. Big tech did not walk away.
The expressed goal is not to make FB less profitable per se. It is to make it less destructive. The effect on profit is a consequence.
OK. Understood. So I will make my presentation about imposing control on the FB algorithm to a citizens’ jury.
Lincoln rather unkindly said that South Carolina was too small for a republic, but too large for an asylum. But the colony of South Carolina did invent the random jury in 1730. Since then they’ve invented secession and Lindsey Graham.
Hi Chris,
an interesting and important issue where I share your basic concerns, though I see different ultimate drivers and thus also different things one can do.
An old insight of Adam Smith I am increasingly veering towards is that one of the great strengths of the US is that there is such diversity in its bubbles. Adam Smith was thinking of all the little religious cults in the US, which he saw as a seething mass of competing bubbles within which all sorts of blatant nonsense was fanatically held to be true. He juxtaposed that vibrancy of stupidity with Europe, which was dominated by a few big bubbles like the Catholic Church and some big protestant churches. He didn’t think they were any less stupid, but bemoaned that there was less to choose from.
Adam Smith argued that the implicit competition between all the US bubbles kept them in check: any cult that became too obviously crazy and idiotic eventually saw its adherents run away, be killed, or die out (such as the Christian sects that advocated that their adherents have no sex and no children, or the ones that encouraged murder). He observed that the European bubbles had much less of that competitive pressure towards collective sanity in them. So he essentially saw the Americans as individually insane but collectively highly sane. Sanity as an emergent property of competing small insane groups (‘bubbles’).
So my worry with Facebook is less the many crazy bubbles it encourages. I don’t expect sanity from bubbles and count on competition to work towards collective sanity. My much bigger problem is the market dominance of just a few internet firms. I hope and expect competition between companies and between countries to solve that one too (though probably via some form of nationalised internet that becomes a national public good offering a suite of standardised services). The challenge will be to have diverse bubbles inside that nationalised system.
I don’t expect that nationalisation to happen first in the US (though it is not impossible; some States might do something like that). It is already happening in China (where diversity of bubbles is a big casualty). Some other authoritarian places are following suit and I do expect several European countries to follow.
In the long run I am very optimistic: the value of diverse bubbles will be rediscovered because it is part of the central constellation of competing truths that share enough common interest, which is the essence of science, democracy, and freedom. The tricks towards that constellation get rediscovered because the constellation works. Monoculturalism does not work and that will be rediscovered too.
Hi Paul, I have cogitated upon your “diversity and competition between bubbles” proposition and cannot see how it works theoretically or recently in practice. The US just went through a bubble-based political process which almost ended in revolution. Obviously one bubble is bad (China) and two bubbles are bad (the US), but 50 bubbles, while much better, are not a basis for functional democracy. Bubbles, by which I mean groups of people who almost exclusively talk to each other, are intrinsically bad.
Why do you think Flat-earthism was nearly dead in 1980? There was basically one guy in California sending out a newsletter. Now there are millions of followers, all thanks to the www-created bubble that they can join 24/7 at a single click, while they sip on their bourbon. I know one and had to unfriend him on FB, he was becoming so obnoxious.
“I don’t expect sanity from bubbles and count on competition to work towards collective sanity.” I am just staggered by this statement. Competition works best when … there is competition. In this case, the members of different bubbles hardly talk. Is your idea that people will hear about various bubbles they might join and the best sounding ones will prosper? So politics becomes a process of marketing entirely? Roll up roll up! They’re lying to you about the Apollo missions!
If Adam Smith was talking about loose alliances of people who still interacted with all members of the polity in good faith, then competition of ideas can work. But that is not what we have with 21st-century bubbles. They are no longer loose alliances, precisely because of the technology that concentrates people into bubbles.
Thirty years ago, insane theories could not survive because acolytes would go to a BBQ and 5 friends would look them in the eye and say “Mate, you’re losing the plot. Here’s why.” Now the same person has spoken to 100 activists who will arm him with misleading alternative facts, will have been unfriended on FB by most of his friends, and may not even go to BBQs of people from the other bubble.
Where do you see the recent evidence that competition is going to sort all this out?
Hi Chris,
thanks for cogitating on it. Yes, all kinds of craziness is currently blooming, some of which was nearly extinct, like Flat-earthism.
Adam Smith wasn’t talking about people from different realities being very tolerant of each other having to work through issues. So no ‘loose alliances’. He was essentially talking about people voting with their feet if things became too extreme in their own little bubble: they would run to other bubbles or be taken down by a coalition of other bubbles who all felt threatened.
In economics the idea is known as the Tiebout theorem and is often applied to regional policy-making, with individuals voting with their feet for the regions with the better policy mix. It is also the same idea underlying experimentation and market economics: lots of people trying things, totally convinced of their own little truth, with others able to spot some notion of success and be attracted to that. So what works, even if purely accidentally, gets magnified. What is too destructive gets abandoned. That process requires no discussion, no meetings of minds, only vague awareness of the success of others.
Crucial to realise is that this mechanism does not lead to “the truth”, or “consensus”, or some other notion of resolution. The experimentation just goes on and on, but as a system it remains fairly sane.
Where can you see this competition working? You see it in many areas, both in history past and right now. Think of European countries from 1400-1900 copying the experiments in other countries that seemed to work (like the idea of the nation state, separation of powers, universal primary education), with countries that did not copy these things falling behind and being abandoned by their populations (e.g. the Austro-Hungarian empire). There was great destruction in that seething mass of change, but also huge progress.
Think of how in the late 1980s communist Soviet leaders abandoned their philosophy, not because they were convinced in debate but because they could see they were not delivering. Their population was running away. China and India copying Western-style regulation in the last 20 years were also following this logic, because they could see those regulations work so well. Within India, this basic logic is extremely clear to see and was the motor of economic development in the 1990-2015 period.
Closer in time, you see these forces at work when it comes to Islamic fundamentalism being squashed by a coalition of other religions and interests that felt too threatened (so not via debate!). Think also of how the covid-mania is unraveling: Texas last week abandoned all its covid policies because it has deduced from looking at other places that they were counter-productive. Countries in Europe are right now edging towards Scandinavian policies on covid because of the devastation of the alternative. If you want an example of voting with feet: the populations of New York, Great Britain, and London each shed about a million people in the last 12 months because their ‘truth’ was not working.
Bubbles in this way of seeing things are just experiments. One should not point to the nonsense within them but ask if we have too few bubbles and if they are really diverse enough.
Once you let go of this desire that we all agree and that a single truth ‘wins’ and rather adopt the desire that things work out well on average, lots of bubbles with crazy ideas in them stop being so threatening. Now, of course, this does not mean that some people should not try to reach a middle ground or that the bubbles should not recognise mutual interests, or that some bubble-truths are deemed too dangerous to allow to thrive in a country. That is part and parcel of the dynamic. But one ceases to judge on the basis of a clear truth. Rather it becomes about some notion of overall success.
I know Tiebout worked up some models on this stuff. They were wildly unrealistic but, hey, they were in simple maths so they must be right.
Your argument is really just an argument that
1) people copy what they conceive to be working in others’ arrangements and
2) some things are sufficiently costly that they drive people away.
Let’s have theorems when we need them. Not when they’re silly.
But then I would say that wouldn’t I? ;)
Hi Nick,
the Tiebout theorem is just a particular reflection of a much broader line of thinking, which is essentially about the value of a radical kind of diversity: diversity in firmly held beliefs, truth if you will.
I think there is a good chance we (you, me, Chris) might actually agree on this because it has nothing to do with opinions on any topic. It is about whole systems and how they evolve.
You yourself have made very similar arguments in your Polanyi stuff: that whilst each node has no idea of its function, the whole has a kind of intelligence that no node has. It’s a very similar idea of local stupidity and aggregate wisdom.
My recent post on China was about the immense loss to a system that tries to be one bubble of truth and values (https://clubtroppo.lateraleconomics.com.au/2021/02/17/what-to-expect-during-a-cold-war-with-china/). A commentator called it ‘meta-unstable’ which is a great way of putting it. The desire for a single truth is often useful (and I personally certainly have that desire), but if a single truth emerges that is a disaster for a large system.
This stuff works at various levels though, and that is where it gets interesting: you want some ‘radical diversity’ within a large system but also some generic mechanisms (to allow ‘good competition’ in Adam Smith’s words). So it’s a balance between some joint institutions in a large system that look after some mutual interests and the punishment of some behaviour, and allowing the various truths to meander where they will.
Note how I am separating the desire for a single truth (which is in many situations needed and useful) from the question of whether actually ending up with a single one is good or bad for a small or large system.
The argument even goes for science, including physics, which has its competing models of reality (i.e. the models of fundamental particles and of huge objects). Their incompatibility is a source of huge creativity among those wanting to get a single truth. But the reality is that so far those different truths have been a boon to physics. For different questions different ‘truths’ seem apt, even though they don’t line up with truths that are useful elsewhere.
Yes, agreed
I was just objecting to the idea that Tiebout turning it into maths added anything. Of course he’s welcome to do so. Can’t do any harm. I just object to his bit of silly maths giving him naming rights.
That’s going to massively devalue facebook for most users if it is to work. I can’t see the 15 posts a day from my mum unless I also view 15N posts a day from random “other viewpoints”?
How does anyone distinguish me following my friend’s business from me following some random I bought one thing from to get their discount? Because if you have a “real people I actually know” exemption the algorithm has to have a way to decide when that exemption applies. The anti-competitive aspect I leave to economists.
Nope. The Guardian on “refugees are human beings” is not alternative to “wind turbines cause cancer”, it’s alternative to “refugees are vermin”, “Australia is a white country” and “we decide”. Making me read posts from the Australian Nasty* Party or Peter Dutton if I want to follow the Refugee Action Coalition is balance. Offensive, sure, but that’s what you get with forced balance and the false equivalence you give as examples.
We’ve also seen the very real consequences of “balance” with the climate catastrophe where at the mild end the 3% of scientists who deny climate change are given equal time and treatment to the 97% who accept it, and also more recently on this blog with Covid and the claim that “kill the weak” is a legitimate position that’s unfairly being denied a position in the debate so you’ll push that position here.
* trying to avoid banned words. Feel free to fix that if there are no banned words.
Also, the exact choice of “balancing” views is going to be very challenging.
On the one hand, your examples are (hopefully) deliberately contrasting sane with insane viewpoints to emphasise the current state of Facebook. But in practice, if “the algorithm” insisted that for the last week balance meant pairing “Scummo defends anonymous cabinet minister accused of rape” with obviously false claims like “Christian Porter is a pedophile and a rapist” and unfounded accusations like “David Littleproud anally raped a 15 year old” … Facebook might find themselves spending a lot of money on lawyers regardless of the actual facts.
I also suspect that no political party would be happy if “balance” meant anything. On the one hand, “if you want to follow your local MP you have to read X times as many articles about their opponents” is a nice and reasonable version of balance. But your examples suggest it’s more likely to be “… you have to read X times as many articles about conspiracy theories, libels on their character, foreign fringe actors and completely unrelated nonsense chosen at random”. Which is probably defamation in Australia purely on the grounds that you’re equating an Australian public figure with insanity.
Hi Moz. You would not have to view 15 contrary posts. I said X posts but selected from these different viewpoints.
As for climate change, the damage done has been precisely because those who became sceptics were sucked into echo chambers. I am personally very willing to read articles that say, for instance, that climate change cannot really be avoided so we should be thinking about mitigating the effects. Lomborg, for instance. I do not agree with him in the end. But most Green voters will never have read a word he has written.
You seem very much against the concept of balance and seem to have a clear idea of what the correct views are. That is the problem. You don’t. Neither do I.
The bottom line question you have to ask yourself is which is better? The current algorithm that corrals you towards one viewpoint, or an algorithm which strongly shepherds you towards alternative views? I could quote Mill here but I am sure you know what the quote would be.
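To make the shepherding idea concrete, here is a toy sketch of the interleaving part. Everything in it is illustrative: the post lists, the pool structure and the ratio parameter are placeholders, and the genuinely hard part (classifying posts into viewpoint pools in the first place) is assumed away.

```python
def build_feed(own_posts, contrast_pools, ratio=1.0):
    """Toy sketch: for every post from your own network, interleave
    roughly `ratio` posts drawn from contrasting-viewpoint pools,
    cycling across pools so no single alternative view dominates."""
    feed = []
    debt = 0.0  # how many contrasting posts the reader currently "owes"
    i = 0       # round-robin index over the pools
    for post in own_posts:
        feed.append(post)
        debt += ratio
        while debt >= 1.0 and contrast_pools:
            pool = contrast_pools[i % len(contrast_pools)]
            if pool:
                feed.append(pool.pop(0))
            i += 1
            debt -= 1.0
    return feed
```

With `ratio=0.5` you would see one contrasting post for every two of your own, which is closer to what I had in mind than the one-for-one reading Moz describes.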
Details of whether your personal friends would be included are just nit-picking.
Chris, I read your ratio as being greater than one. So to view 15 posts from my mother I’d have to view 15 posts (or more) from “alternative views”. Either, as you imply, random posts heavily biased towards nonsense, or 15 posts specifically chosen by an AI to balance what my mum posts.
You seem to imagine that a “green voter” would choose not to read contrary positions, but in my experience that’s mistaken. In my case it’s definitely a false assumption on your part, because I’ve read Lomborg as well as other science-like disagreements. I’ve *paid* New Matilda and argued with geoff about nuclear power FFS. I’ve also read many non-scientific critiques and some outright fantasies.
It’s definitely not true that green voters are ignorant of the existence and broad strokes of the positions. The disagreement is largely about whether the gish gallop is worthwhile, with some people saying that every bit of nonsense must be closely examined and rebutted in detail, in public. A position which is never accorded even the slightest respect from the other side – there has been no discussion by the Liberals of the thousands of arguments against their policies. Why should the anti-catastrophe side have to argue in ways the pro-catastrophe side doesn’t?
I fear you’re mistaking disagreement about the feasibility of legislating algorithms for disagreement about their desirability. If you can bear to, check out Mike Masnick’s thoughts on automated content moderation. Perhaps start here:
https://www.techdirt.com/articles/20191111/23032743367/masnicks-impossibility-theorem-content-moderation-scale-is-impossible-to-do-well.shtml
(I’m happy to eschew ad hominem, but I’m equally not going to sit here quietly while you sling excrement at me)
It’s an example of a hard problem in automated moderation. You seem to imagine that an AI can look at what my mum posts and either decide that it’s my mum so she doesn’t require balance; or select some proportion of “balancing” posts that I have to view for each one of hers.
I don’t know how either is possible, even for a much smaller social network than Facebook. You might be far more expert than I am in AI classification algorithms, but I’m guessing you’re the economist type featured on various Australian university websites. I suggest not going down the hole Malcolm Turnbull did by saying “Australian law overrides the laws of mathematics”. He got global coverage for that remark.
It would also be worth looking at the research on the human cost of human moderation before you get too excited about human moderators as a solution. Even small, selective social networks (Parler and Gab, for example) have problems with bots posting and that quickly overwhelms human moderators. In volume, but also in emotion – there’s only so much time any one person can spend deciding whether a given image has crossed the line and is definitely too gory, too young, or too pornographic, before they need a break and eventually, before they burn out. Facebook discards those people when they’re too traumatised to continue, and that’s a real cost to society, even if it’s not easily measured in money.
Sorry if anything I wrote seemed ad hominem. BTW: I am not an economist; I am a data scientist. While I agree that my idea involves solving a hard problem, I do not think it is insoluble. The results of the system would require human oversight to see if it was meeting its aims. Sortition, as Nick suggests, might be a good way to select these humans.
Chris, I work as a software engineer and when management say “I have a simple task” I habitually start with the extreme edge cases and work inwards to the actual requirement. We might all agree that Facebook causes problems, but Parler is an example of how we don’t all agree on what those problems are.
Excellent. We have two decisions to be made per item of content: is this news, and if so is the randomly selected next item a contrasting view*. The board will initially not be subject to appeal, not even via court cases or legislation, so they just have to spit out decisions on each matter and then Facebook will use those to retrain the AI.
But then… if my mum forwards an article, does that count as news? There’s a gradation there through to her saying “that Nick is making a fool of himself again”, referring to her local MP… which is obviously not news, I think we can both agree.
If you want random content Omegle might be an instructive case study.
I’m thinking you want a single board of 10-20 people for Australia, who decide things like which random pools get used for which loose subjects when viewed by whom, then deal with the multitudinous complaints about those decisions and their effects. Even if they just have to decide whether Facebook really is showing random posts, that’s a hard decision, and one that rapidly becomes political (see: fact checking politicians, the anti-conservative bias of media, etc.).
The harassment problem is also going to have to be addressed. At the user level, having your public posts shown to random people will inevitably lead to judgemental comments and worse (this will exacerbate an existing problem, not create it).
At the board level, those people should be publicly known since they’re public servants, but at the same time they are the public face of “you have to see stupid cat photos, you can’t just search endlessly for nip slips”, and not everyone is content with writing angry letters to the editor. Sortition makes the latter worse, BTW, not better. Now it’s Jamshed Random Brownperson being thrust into the limelight (or doxxed, should Andrew Bolt take offence at a decision; Andie Fox is an instructive example there).
“Chris, I read your ratio as being greater than one. So to view 15 posts from my mother I’d have to view 15 posts (or more) from “alternative views”.” You have to love a guy who loves his mother!
Can I assume that you accept the aim of having an algorithm that promotes diversity in your feed, and that we are arguing about the difficulties of algorithmic implementation? Not to minimise these, but I did not want to get into the details so much. Apologies for pejoratively calling this kind of thing “nit-picking” in my post, but I was signalling that readers should engage with the principle rather than the practical implementation.
As for your Mum, it would depend on her page rank. The diversity requirement would be weighted against posts with high page ranks. If she has 100,000 links per day then she is an activist and, I admit, her posts to you would potentially be curtailed. How about you text her on WhatsApp?
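To be concrete about that weighting, here is a toy sketch. The threshold, the log scale and the function name are all made up for illustration; a real page rank would be computed over the whole link graph, not read off a share counter.

```python
import math

def contrast_ratio(inbound_links, base_ratio=1.0, threshold=100):
    """Toy sketch: scale the number of contrasting posts required per
    post by how widely shared the source is. Low-reach posts (your
    mum's) attract no balancing; high-reach activist accounts attract
    proportionally more. All numbers here are illustrative."""
    if inbound_links <= threshold:
        return 0.0  # personal chatter: no balancing required
    return base_ratio * math.log10(inbound_links / threshold)
```

Under these illustrative numbers, a post shared 50 times needs no balancing at all, while one shared 100,000 times would need about three contrasting posts shown alongside it.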
Harassment, trolling and other issues are part of moderation and not the subject of my post.