# 2
# 3
# 4
The term is used to indicate the reliability of a system. For example, if a spam detector stopped 99.999% of spam emails, it could be described as five nines reliable.
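
For illustration (not part of the original answer), a minimal Python sketch that maps a number of nines to an availability percentage and the downtime it allows per year, assuming a 365-day year:

```python
# Illustration only: convert a number of nines into an availability
# percentage and the downtime it allows per year (365-day year assumed).
SECONDS_PER_YEAR = 365 * 24 * 60 * 60

def availability(nines: int) -> float:
    """Return the availability fraction for a given number of nines,
    e.g. availability(5) == 0.99999."""
    return 1 - 10 ** (-nines)

for n in range(3, 6):
    frac = availability(n)
    downtime_min = (1 - frac) * SECONDS_PER_YEAR / 60
    print(f"{n} nines = {frac * 100:.3f}% uptime, ~{downtime_min:.1f} min downtime/year")
```

Five nines therefore corresponds to roughly five minutes of failure per year, while four nines allows closer to an hour.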
# 5
Virtualisation of systems deployed on cloud platforms ensures that each system is segregated from the others. This means that if one system is compromised, it is very unlikely that the compromise will spread to another service virtualised on the same hardware.

In particular, a flaw in another organisation's system that is virtualised on the same hardware as yours should not be exploitable in a way that affects your system.
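
As a rough analogy (illustration only, not from the original note), OS process isolation can stand in for the stronger isolation a hypervisor enforces between tenants: each process gets its own address space, so one "tenant" corrupting its own memory cannot touch the other's.

```python
from multiprocessing import Process

# Analogy only: OS process isolation stands in here for hypervisor-level
# isolation between cloud tenants. Each child process works on its own copy
# of the data in its own address space.

def tenant(name: str, secret: list, compromised: bool) -> None:
    if compromised:
        secret[0] = 0xDEAD  # attacker corrupts this tenant's own copy only
    print(f"{name}: secret = {secret[0]:#x}")

if __name__ == "__main__":
    secret = [0x1234]
    a = Process(target=tenant, args=("tenant-a (compromised)", secret, True))
    b = Process(target=tenant, args=("tenant-b", secret, False))
    a.start(); a.join()
    b.start(); b.join()
    # tenant-b and the parent still see the original value, because tenant-a
    # only corrupted its own isolated copy.
    print(f"parent:   secret = {secret[0]:#x}")
```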
# 8
One example of an ethical issue with machine learning is the use of machine learning algorithms for deepfakes. Deepfakes are essentially an advanced form of face swapping. They are very useful for things like adding deceased actors into films. However, they raise many ethical issues. They can be used to produce fake videos of trusted figures, such as politicians, designed to influence the public's opinion of them or otherwise cause harm to society. Deepfaked videos are particularly damaging because, although we have learned that text and images can be faked using applications like Photoshop, the public generally places more trust in video. This means that these videos are more likely to cause harm, as we are less wary of them. Furthermore, the mere existence of the ability to fake video means that the public can place less trust in genuine footage of politicians and other public figures, who may have to find other ways to ensure their content can be trusted.

So, although machine learning deepfakes can be used for good purposes, they are also being used for harmful ones. When developing tools to produce deepfakes, developers should consider the ethical implications of their software - who might use it, and for what purpose - and try to work out how to limit the harm it could cause.