---
title: "22-trends-in-hci"
aliases:
tags:
- info203
- lecture
---

[slides](https://blackboard.otago.ac.nz/bbcswebdav/pid-2827522-dt-content-rid-18612267_1/courses/INFO203_S1DNIE_2022/2022/INFO203_Lecture23.pdf)

How the methodology of HCI is used. Theory vs. practice: there is a lot of work being done to improve the methodology.

# Bad style - HARKing and the replication crisis

# Publication bias

- Computing research validates claims using statistical significance as the standard of evidence.
- Statistical evidence usually assumes 95% confidence (p <= 0.05).
- An analysis of 362 published papers found that 97% rejected the null hypothesis.
- Papers whose hypotheses were not supported tend not to be published.
- *Publication bias*: papers supporting their hypotheses are accepted for publication at a much higher rate than those that do not.
- HARKing (Hypothesizing After the Results are Known), also known as 'outcome switching':
  - Post-hoc reframing of experimental intentions to present a p-fished outcome as having been predicted from the start.
- p-hacking: manipulation of experimental and analysis methods to produce statistically significant results.
- p-fishing: seeking statistically significant effects beyond the original hypothesis (quantified in the sketch below).

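A quick way to see why p-fishing works: if each of k independent measures has a 5% chance of a spurious "significant" result, the chance that at least one comes up significant grows quickly with k. A minimal sketch in plain Python, with illustrative values of k:

```python
# Chance of at least one "significant" result at alpha = 0.05 when testing
# k independent measures, none of which has a real underlying effect.
alpha = 0.05
for k in (1, 5, 10, 20):
    p_any = 1 - (1 - alpha) ** k
    print(f"{k:2d} measures -> {p_any:.0%} chance of a false positive")
# 1 -> 5%, 5 -> 23%, 10 -> 40%, 20 -> 64%
```
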
In a survey of over 2000 psychology researchers, John et al. examined the prevalence of questionable experimental practices (forms of HARKing):

1. Failing to report all dependent measures, which opens the door for selective reporting of favourable findings – 63.4%;
2. Deciding to collect additional data after checking if the effects were significant – 55.9%;
3. Failing to report all of the study's conditions – 27.7%;
4. Stopping data collection early once the significant effect is found (simulated in the sketch after this list) – 15.6%;
5. Rounding off a p value (e.g., reporting p = .05 when the actual value is p = .054) – 22.0%;
6. Selectively reporting studies that worked – 45.8%;
7. Excluding data after looking at the impact of doing so – 38.2%;
8. Reporting an unexpected finding as having been predicted – 27.0%;
9. Reporting a lack of effect of demographic variables (e.g., gender) without checking – 3.0%;
10. Falsifying data – 0.6%.

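Practices 2 and 4 (optional stopping) look harmless but badly inflate the false-positive rate. A minimal simulation sketch, assuming numpy and scipy are available and using made-up batch sizes, of repeatedly "peeking" at a study where the true effect is zero:

```python
import numpy as np
from scipy import stats

# Simulate optional stopping: collect data in batches, run a t-test after
# each batch, and stop as soon as p <= 0.05. The true effect is zero, so
# every "significant" result is a false positive.
rng = np.random.default_rng(0)
N_EXPERIMENTS = 2000
BATCH = 10     # participants collected per batch (illustrative)
MAX_N = 100    # give up after this many participants

false_positives = 0
for _ in range(N_EXPERIMENTS):
    data = rng.normal(0.0, 1.0, BATCH)   # samples from a null distribution
    while True:
        _, p = stats.ttest_1samp(data, 0.0)
        if p <= 0.05:                    # a "peek" finds significance: stop
            false_positives += 1
            break
        if len(data) >= MAX_N:           # honest stopping point reached
            break
        data = np.concatenate([data, rng.normal(0.0, 1.0, BATCH)])

print(f"False-positive rate with optional stopping: "
      f"{false_positives / N_EXPERIMENTS:.1%}")  # well above the nominal 5%
```

Without peeking (a single test at the final sample size), the rate stays at about 5%.
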


*File drawer effect*: null findings tend to be unpublished and therefore hidden from the scientific community.
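
One consequence, sketched below under illustrative assumptions (alpha = 0.05, 80% power, and a guess that 10% of tested hypotheses are true): if only significant results reach publication, a sizeable share of the published record is false.

```python
# Ioannidis-style back-of-envelope: share of published significant findings
# that are false positives, if only significant results get published.
alpha = 0.05   # false-positive rate per test
power = 0.80   # chance a real effect is detected (assumption)
prior = 0.10   # assumed fraction of tested hypotheses that are actually true

true_findings  = power * prior        # real effects that reach significance
false_findings = alpha * (1 - prior)  # null effects that reach significance
share_false = false_findings / (true_findings + false_findings)
print(f"False share of the published record: {share_false:.0%}")  # ~36%
```
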
## Solutions
Preregistration: Registries in which researchers preregister their intentions, hypotheses, and methods (including sample sizes and precise plans for the data analyses) for upcoming experiments.

Registered Reports: Papers are submitted for review prior to conducting the experiment. Registered reports include the study motivation, related work, hypotheses, and detailed method: everything that might be expected in a traditional paper except for the results and their interpretation.

- Redefine or abandon statistical significance.
- Create data repositories and support replication.

# Trends
## Wearable sensing and actuation




Applications:

- VR haptics
- directional cues on the skin
- notifications
- more



## ElectroDermis


Focused more on the manufacturing of the stickers.

## EarPut
[EarPut paper](https://i.imgur.com/ZqfaHUt.png)

## SkinTrack
