vault backup: 2022-10-02 19:29:40

This commit is contained in:
Jet Hughes 2022-10-02 19:29:40 +13:00
parent b66537988b
commit 912d16da3b


- https://www.theguardian.com/media/2016/nov/17/gab-alt-right-social-media-twitter
- https://arxiv.org/pdf/1906.04261.pdf
- https://arxiv.org/abs/1802.05287
- https://www.reddithelp.com/hc/en-us/articles/204511829-What-is-karma-
- https://web.archive.org/web/20181031003944/https://www.bloomberg.com/news/articles/2018-10-30/gab-an-online-haven-for-white-supremacists-plots-its-future
# Intro
As members of society we must consider the ethical implications of our actions and inactions. This extends to our private lives and our work. As members of the IT profession we have an obligation to adhere to certain standards. These have been collated and formalised into the ACM Code of Ethics. This Code "expresses the conscience of the profession", putting human well-being at its centre. Although it provides a guideline for us to follow, it is not specific to everyone and should merely act as a foundation for individuals, businesses, and governments to build upon.
How can we apply this Code to the issue of misinformation, censorship, and freedom of expression? This is a very difficult and nuanced topic. I personally think that freedom of speech is a very important part of our lives and that it should be held in the highest regard. However, it is not possible to grant entirely free speech to everyone while completely prohibiting misinformation. We must find a trade-off between free speech and censorship of misinformation.
As Mark Zuckerberg mentioned on episode 1863 of the Joe Rogan Experience podcast, when identifying which channels should be censored, we have to choose between more false positives or more false negatives. It takes a substantial effort to fact-check and verify articles. For this reason, sites such as Facebook make use of AI algorithms to flag potentially unwanted content REF. This has some inherent issues: namely, how do these algorithms decide what is harmful? What data are they trained on? Who chose what to train the algorithms on? Why should they be the ones to decide what is right and wrong? In the interview with Joe Rogan, Zuckerberg stated that for important and controversial cases, posts are reviewed by third parties which analyse and fact-check these articles. This is still not a perfect solution. The decision about what is censored is still made by a select group of people. Maybe this is the best option; I can't say for sure that it is not.
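The false-positive/false-negative trade-off above can be made concrete with a small sketch. Everything here is invented for illustration (the scores, labels, and thresholds are not from any real moderation system): a classifier assigns each post a score, and moving the decision threshold trades legitimate posts wrongly flagged (false positives) against misinformation that slips through (false negatives).

```python
# Hypothetical sketch: a moderation classifier scores each post with the
# probability that it is misinformation. Raising the flagging threshold
# reduces false positives but increases false negatives, and vice versa.

def confusion_counts(scores, labels, threshold):
    """Count (false positives, false negatives) at a given threshold.
    labels: 1 = actually misinformation, 0 = legitimate post."""
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    return fp, fn

# Fabricated model scores and ground-truth labels for eight posts.
scores = [0.95, 0.80, 0.65, 0.55, 0.40, 0.30, 0.20, 0.10]
labels = [1,    1,    0,    1,    0,    0,    1,    0]

for threshold in (0.25, 0.50, 0.75):
    fp, fn = confusion_counts(scores, labels, threshold)
    print(f"threshold={threshold}: false positives={fp}, false negatives={fn}")
```

On this toy data, the strict threshold flags three legitimate posts while missing one piece of misinformation, and the lenient threshold does the reverse; no threshold gives zero of both, which is exactly the choice Zuckerberg describes.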
When the Voices for Freedom group grew sufficiently large, Facebook made the decision to ban them. Subsequently, they switched to using Telegram messenger and the Gab social network as their main platforms. These sites value free speech and individual liberty above all else, and they have become a haven for the far-right and other fringe groups, many of which have been banned from mainstream sites such as Facebook and Twitter REF. This is concerning because it concentrates misinformation spreaders, trolls, and conspiracy theorists in one large echo chamber.
Ideas
- a social media platform that had a cap on the number of followers a person can have, and the number of people a person can follow
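The cap idea above could be enforced at follow time. This is a minimal sketch under invented assumptions — the limits, function names, and dict-based storage are all hypothetical, not part of any real platform:

```python
# Hypothetical sketch of the follower-cap idea: a follow request is
# rejected once either side reaches its (invented) limit.

MAX_FOLLOWERS = 500   # cap on how many people may follow one account
MAX_FOLLOWING = 150   # cap on how many accounts one person may follow

def can_follow(follower_counts, following_counts, follower, target):
    """Return True if `follower` may follow `target` under both caps."""
    return (following_counts.get(follower, 0) < MAX_FOLLOWING
            and follower_counts.get(target, 0) < MAX_FOLLOWERS)

def follow(follower_counts, following_counts, follower, target):
    """Record the follow if allowed; return whether it succeeded."""
    if not can_follow(follower_counts, following_counts, follower, target):
        return False
    following_counts[follower] = following_counts.get(follower, 0) + 1
    follower_counts[target] = follower_counts.get(target, 0) + 1
    return True
```

Capping reach this way limits how far any single account's posts can spread, which is the essay's point: it bounds the audience a misinformation spreader can accumulate.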