See also: the *Coded Bias* documentary.
# Ethics
## 1 Case studies
1. [[facial recognition in US riots 2021-01-06]]
2. [[Anti govt protest china]]
3. [[How is safe enough for autonomous vehicles]]
### 1.1 Differences between cases 1 and 2

Government use vs vigilante use.

My judgements contain additional context, e.g., pro-democratic vs anti-democratic.

The world contains vast differences in:

- how systems of law work
- the extent of civil liberties afforded to individuals
### 1.2 Discussion
When developing a technology, you don't know what it could be used for.
## 2 Ethical handling of data
- Data moves very quickly due to computerised systems.
- The Privacy Act 2020 governs how personal information is handled in NZ (see the sketch after this list).
- It's unethical to ignore potential security problems.
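
As a small illustration of one aspect of ethical data handling, here is a minimal Python sketch of pseudonymisation and data minimisation before records are passed on. The record format, field names, and key handling are invented for illustration; this is not a claim about what the Privacy Act 2020 requires.

```python
import hashlib
import hmac

# Secret key held by the data controller; in practice this would come from a
# secrets manager, never from source code (hard-coded here only for the sketch).
PSEUDONYMISATION_KEY = b"replace-with-a-real-secret"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash so records can still be
    linked internally without exposing the person's identity downstream."""
    return hmac.new(PSEUDONYMISATION_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()[:16]

def minimise(record: dict, needed_fields: set) -> dict:
    """Keep only the fields the downstream use case actually needs."""
    return {k: v for k, v in record.items() if k in needed_fields}

# Hypothetical record; all values are made up.
record = {"name": "Aroha Example", "email": "aroha@example.nz",
          "age_band": "25-34", "purchase_total": 127.50}

safe = minimise(record, needed_fields={"age_band", "purchase_total"})
safe["user_ref"] = pseudonymise(record["email"])
print(safe)
```

The design choice here is to separate "can we still link records?" (the keyed hash) from "do we need this field at all?" (minimisation), since most downstream uses need far less personal information than is collected.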
## 3 Ethical handling of bias and errors, e.g., in AI
- Large datasets often include bias and errors.
- AI trained on these datasets will also be biased.
    - e.g., facial recognition training data overrepresenting white males (see the sketch after this list)
- ML algorithms are often opaque:
    - it's not possible to understand how decisions are reached
    - this makes assessing the suitability of AI for a use case difficult
- Explainable AI is one response to this opacity.
- Attacks are also possible, e.g., 'trapdoors' hidden within ML training data.
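
To make the overrepresentation point concrete, here is a minimal Python sketch that audits the demographic composition of a labelled face dataset and compares per-group error rates of some model's predictions. All group labels, counts, and predictions below are invented for illustration.

```python
from collections import Counter

def audit_composition(groups):
    """Report how strongly each demographic group is represented."""
    counts = Counter(groups)
    total = len(groups)
    for group, n in counts.most_common():
        print(f"{group:>22}: {n:3d} samples ({n / total:.0%})")

def per_group_error_rate(groups, y_true, y_pred):
    """Error rate broken down by demographic group."""
    errors, totals = Counter(), Counter()
    for g, t, p in zip(groups, y_true, y_pred):
        totals[g] += 1
        errors[g] += int(t != p)
    return {g: errors[g] / totals[g] for g in totals}

# Invented data: 80% of the training faces are "white male".
groups = ["white male"] * 80 + ["white female"] * 10 + ["darker-skinned female"] * 10
y_true = [1] * 100                                       # every face should be recognised
y_pred = [1] * 78 + [0] * 2 + [1] * 7 + [0] * 3 + [1] * 5 + [0] * 5  # invented model output

audit_composition(groups)
print(per_group_error_rate(groups, y_true, y_pred))
# Overrepresented group: ~2.5% error rate; underrepresented groups: 30% and 50%.
```

The exact numbers are made up; the point is that an aggregate accuracy of 90% can hide a 50% error rate for an underrepresented group, which is exactly what a per-group audit is meant to surface.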
## 4 False or misleading claims
- Pressure to release can lead to false claims.
- Are features fully tested?
- Need to assess risks of bias.
- e.g., AWS uptime information: it is rumoured that the displayed service status colour is a management decision.
## 5 Your responsibility
- Don't stay silent.
## 6 Professional responsibilities
- Computer science per se lacks professional standards.
- There are some professional bodies which encode responsibilities:
    - ACM Code of Ethics and Professional Conduct
    - IEEE Code of Ethics
- Neither is specific to NZ.
- Within NZ, Treaty of Waitangi obligations must also be considered.