diff --git a/content/Math/MOC.md b/content/Math/MOC.md
index b22784085..8baa7ebe4 100644
--- a/content/Math/MOC.md
+++ b/content/Math/MOC.md
@@ -9,16 +9,16 @@ tags:
## Basic concept
-* [Quantile](Math/Statistics/Basic/Quantile.md)
+* [Quantile](math/Statistics/Basic/Quantile.md)
# Discrete mathematics
## Set theory
-* [Cantor Expansion](Math/discrete_mathematics/set_theory/cantor_expansion/cantor_expansion.md)
+* [Cantor Expansion](math/discrete_mathematics/set_theory/cantor_expansion/cantor_expansion.md)
# Optimization Problem
-* [Quadratic Programming](Math/optimization_problem/Quadratic_Programming.md)
\ No newline at end of file
+* [Quadratic Programming](math/optimization_problem/Quadratic_Programming.md)
\ No newline at end of file
diff --git a/content/Math/real_analysis/cauchy_principal_value.md b/content/Math/real_analysis/cauchy_principal_value.md
index 9c8e35e2c..cd9d32d1f 100644
--- a/content/Math/real_analysis/cauchy_principal_value.md
+++ b/content/Math/real_analysis/cauchy_principal_value.md
@@ -13,10 +13,10 @@
$$
-![](Math/real_analysis/attachments/6BC0B163CEFCF127E1D70326AB7D1648%201.png)
+![](math/real_analysis/attachments/6BC0B163CEFCF127E1D70326AB7D1648%201.png)
-![](Math/real_analysis/attachments/78DC2683DB0DF2EFEB6215DAB8C18C25.png)
+![](math/real_analysis/attachments/78DC2683DB0DF2EFEB6215DAB8C18C25.png)
The Cauchy principal value is a method for assigning values to *certain improper integrals* that would otherwise be undefined. In this method, a singularity inside the integration interval is avoided by restricting the integral to the non-singular part of the domain and taking a symmetric limit.
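As a quick illustration of the definition above (an editor's sketch, not part of the original note; $c$ denotes the interior singularity), the principal value symmetrically excises an $\varepsilon$-neighbourhood of the singularity and takes the limit:

$$
\operatorname{PV}\!\int_a^b f(x)\,dx = \lim_{\varepsilon\to 0^+}\left(\int_a^{c-\varepsilon} f(x)\,dx + \int_{c+\varepsilon}^{b} f(x)\,dx\right)
$$

For example, $\operatorname{PV}\int_{-1}^{2}\frac{dx}{x} = \lim_{\varepsilon\to 0^+}\left(\ln\varepsilon + \ln 2 - \ln\varepsilon\right) = \ln 2$, even though the ordinary improper integral diverges.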
diff --git a/content/Photography/Aesthetic/Landscape/Landscape_MOC.md b/content/Photography/Aesthetic/Landscape/Landscape_MOC.md
index 26552df7c..a98f99cd2 100644
--- a/content/Photography/Aesthetic/Landscape/Landscape_MOC.md
+++ b/content/Photography/Aesthetic/Landscape/Landscape_MOC.md
@@ -6,4 +6,4 @@ tags:
- MOC
---
-* [🌊Sea MOC](Photography/Aesthetic/Landscape/Sea/Sea_MOC.md)
\ No newline at end of file
+* [🌊Sea MOC](photography/Aesthetic/Landscape/Sea/Sea_MOC.md)
\ No newline at end of file
diff --git a/content/Photography/Aesthetic/Landscape/Sea/Fujifilm_Blue_by_小红书_Philips谢骏.md b/content/Photography/Aesthetic/Landscape/Sea/Fujifilm_Blue_by_小红书_Philips谢骏.md
index 6e6f0fccc..314b32e6f 100644
--- a/content/Photography/Aesthetic/Landscape/Sea/Fujifilm_Blue_by_小红书_Philips谢骏.md
+++ b/content/Photography/Aesthetic/Landscape/Sea/Fujifilm_Blue_by_小红书_Philips谢骏.md
@@ -6,22 +6,22 @@ tags:
- photography
---
-![](Photography/Aesthetic/Landscape/Sea/attachments/Pasted%20image%2020230420014349.png)
+![](photography/Aesthetic/Landscape/Sea/attachments/Pasted%20image%2020230420014349.png)
-![](Photography/Aesthetic/Landscape/Sea/attachments/Pasted%20image%2020230420014354.png)
+![](photography/Aesthetic/Landscape/Sea/attachments/Pasted%20image%2020230420014354.png)
-![](Photography/Aesthetic/Landscape/Sea/attachments/Pasted%20image%2020230420014401.png)
+![](photography/Aesthetic/Landscape/Sea/attachments/Pasted%20image%2020230420014401.png)
-![](Photography/Aesthetic/Landscape/Sea/attachments/Pasted%20image%2020230420014613.png)
+![](photography/Aesthetic/Landscape/Sea/attachments/Pasted%20image%2020230420014613.png)
-![](Photography/Aesthetic/Landscape/Sea/attachments/Pasted%20image%2020230420014622.png)
+![](photography/Aesthetic/Landscape/Sea/attachments/Pasted%20image%2020230420014622.png)
-![](Photography/Aesthetic/Landscape/Sea/attachments/Pasted%20image%2020230420014634.png)
+![](photography/Aesthetic/Landscape/Sea/attachments/Pasted%20image%2020230420014634.png)
# Reference
diff --git a/content/Photography/Aesthetic/Landscape/Sea/Sea_MOC.md b/content/Photography/Aesthetic/Landscape/Sea/Sea_MOC.md
index 5678f48d8..16ce5366a 100644
--- a/content/Photography/Aesthetic/Landscape/Sea/Sea_MOC.md
+++ b/content/Photography/Aesthetic/Landscape/Sea/Sea_MOC.md
@@ -7,5 +7,5 @@ tags:
- aesthetic
---
-* [Fujifilm Blue🌊, 小红书-Philips谢骏](Photography/Aesthetic/Landscape/Sea/Fujifilm_Blue_by_小红书_Philips谢骏.md)
-* [豊島🏝, Instagram-shiifoncake](Photography/Aesthetic/Landscape/Sea/豊島_Instagram_shiifoncake.md)
\ No newline at end of file
+* [Fujifilm Blue🌊, 小红书-Philips谢骏](photography/Aesthetic/Landscape/Sea/Fujifilm_Blue_by_小红书_Philips谢骏.md)
+* [豊島🏝, Instagram-shiifoncake](photography/Aesthetic/Landscape/Sea/豊島_Instagram_shiifoncake.md)
\ No newline at end of file
diff --git a/content/Photography/Aesthetic/Landscape/Sea/豊島_Instagram_shiifoncake.md b/content/Photography/Aesthetic/Landscape/Sea/豊島_Instagram_shiifoncake.md
index 968724aeb..adefb235a 100644
--- a/content/Photography/Aesthetic/Landscape/Sea/豊島_Instagram_shiifoncake.md
+++ b/content/Photography/Aesthetic/Landscape/Sea/豊島_Instagram_shiifoncake.md
@@ -6,17 +6,17 @@ tags:
- landscape
- aesthetic
---
-![](Photography/Aesthetic/Landscape/Sea/attachments/shiifoncake_338949220_771246770941652_287141902256013940_n.jpg)
+![](photography/Aesthetic/Landscape/Sea/attachments/shiifoncake_338949220_771246770941652_287141902256013940_n.jpg)
-![](Photography/Aesthetic/Landscape/Sea/attachments/shiifoncake_339164445_155642070453847_6842139942547564019_n%20(1).jpg)
+![](photography/Aesthetic/Landscape/Sea/attachments/shiifoncake_339164445_155642070453847_6842139942547564019_n%20(1).jpg)
-![](Photography/Aesthetic/Landscape/Sea/attachments/shiifoncake_339164445_155642070453847_6842139942547564019_n.jpg)
+![](photography/Aesthetic/Landscape/Sea/attachments/shiifoncake_339164445_155642070453847_6842139942547564019_n.jpg)
-![](Photography/Aesthetic/Landscape/Sea/attachments/shiifoncake_338803198_1141886276488589_5464974698780309052_n%20(1).jpg)
+![](photography/Aesthetic/Landscape/Sea/attachments/shiifoncake_338803198_1141886276488589_5464974698780309052_n%20(1).jpg)
-![](Photography/Aesthetic/Landscape/Sea/attachments/shiifoncake_338803198_1141886276488589_5464974698780309052_n.jpg)
+![](photography/Aesthetic/Landscape/Sea/attachments/shiifoncake_338803198_1141886276488589_5464974698780309052_n.jpg)
-![](Photography/Aesthetic/Landscape/Sea/attachments/shiifoncake_338758486_601356648715316_3737336679741136784_n.jpg)
+![](photography/Aesthetic/Landscape/Sea/attachments/shiifoncake_338758486_601356648715316_3737336679741136784_n.jpg)
# Reference
diff --git a/content/Photography/Aesthetic/Polaroid/Polaroid_aesthetic_MOC.md b/content/Photography/Aesthetic/Polaroid/Polaroid_aesthetic_MOC.md
index dd50f5e4d..7566691e5 100644
--- a/content/Photography/Aesthetic/Polaroid/Polaroid_aesthetic_MOC.md
+++ b/content/Photography/Aesthetic/Polaroid/Polaroid_aesthetic_MOC.md
@@ -6,4 +6,4 @@ tags:
- MOC
---
-* [🖼How to show Polaroid photo in a great way](Photography/Aesthetic/Polaroid/Polaroid_showcase.md)
\ No newline at end of file
+* [🖼How to show Polaroid photo in a great way](photography/Aesthetic/Polaroid/Polaroid_showcase.md)
\ No newline at end of file
diff --git a/content/Photography/Aesthetic/Polaroid/Polaroid_showcase.md b/content/Photography/Aesthetic/Polaroid/Polaroid_showcase.md
index c19b8336c..f314724da 100644
---
a/content/Photography/Aesthetic/Polaroid/Polaroid_showcase.md +++ b/content/Photography/Aesthetic/Polaroid/Polaroid_showcase.md @@ -8,18 +8,18 @@ tags: -![](Photography/Aesthetic/Polaroid/attachments/IMG_5330.jpg) +![](photography/Aesthetic/Polaroid/attachments/IMG_5330.jpg) -![](Photography/Aesthetic/Polaroid/attachments/IMG_5329.jpg) +![](photography/Aesthetic/Polaroid/attachments/IMG_5329.jpg) -![](Photography/Aesthetic/Polaroid/attachments/IMG_5327.jpg) +![](photography/Aesthetic/Polaroid/attachments/IMG_5327.jpg) -![](Photography/Aesthetic/Polaroid/attachments/IMG_5334.jpg) +![](photography/Aesthetic/Polaroid/attachments/IMG_5334.jpg) Credits to [比扫描仪更easy的宝丽来翻拍解决方案 -BonBon的Pan](https://www.xiaohongshu.com/user/profile/6272c025000000002102353b/6331af53000000001701acfd) \ No newline at end of file diff --git a/content/Photography/Aesthetic/Portrait/Flower_and_Girl.md b/content/Photography/Aesthetic/Portrait/Flower_and_Girl.md index a3c29bbac..b19b53e00 100644 --- a/content/Photography/Aesthetic/Portrait/Flower_and_Girl.md +++ b/content/Photography/Aesthetic/Portrait/Flower_and_Girl.md @@ -9,45 +9,45 @@ tags: Credits to [Marta Bevacqua](https://www.martabevacquaphotography.com/), Thanks🌸 -![](Photography/Aesthetic/Portrait/attachments/14.jpg) +![](photography/Aesthetic/Portrait/attachments/14.jpg) -![](Photography/Aesthetic/Portrait/attachments/15.jpg) +![](photography/Aesthetic/Portrait/attachments/15.jpg) -![](Photography/Aesthetic/Portrait/attachments/16.jpg) +![](photography/Aesthetic/Portrait/attachments/16.jpg) -![](Photography/Aesthetic/Portrait/attachments/17.jpg) +![](photography/Aesthetic/Portrait/attachments/17.jpg) -![](Photography/Aesthetic/Portrait/attachments/18.jpg) +![](photography/Aesthetic/Portrait/attachments/18.jpg) -![](Photography/Aesthetic/Portrait/attachments/19.jpg) +![](photography/Aesthetic/Portrait/attachments/19.jpg) -![](Photography/Aesthetic/Portrait/attachments/20.jpg) +![](photography/Aesthetic/Portrait/attachments/20.jpg) -![](Photography/Aesthetic/Portrait/attachments/21.jpg) +![](photography/Aesthetic/Portrait/attachments/21.jpg) -![](Photography/Aesthetic/Portrait/attachments/22.jpg) +![](photography/Aesthetic/Portrait/attachments/22.jpg) -![](Photography/Aesthetic/Portrait/attachments/content%20(1).jpg) +![](photography/Aesthetic/Portrait/attachments/content%20(1).jpg) -![](Photography/Aesthetic/Portrait/attachments/content%20(2).jpg) +![](photography/Aesthetic/Portrait/attachments/content%20(2).jpg) -![](Photography/Aesthetic/Portrait/attachments/content%20(3).jpg) +![](photography/Aesthetic/Portrait/attachments/content%20(3).jpg) -![](Photography/Aesthetic/Portrait/attachments/content%20(4).jpg) +![](photography/Aesthetic/Portrait/attachments/content%20(4).jpg) -![](Photography/Aesthetic/Portrait/attachments/content%20(5).jpg) +![](photography/Aesthetic/Portrait/attachments/content%20(5).jpg) -![](Photography/Aesthetic/Portrait/attachments/content%20(6).jpg) +![](photography/Aesthetic/Portrait/attachments/content%20(6).jpg) -![](Photography/Aesthetic/Portrait/attachments/content%20(7).jpg) +![](photography/Aesthetic/Portrait/attachments/content%20(7).jpg) -![](Photography/Aesthetic/Portrait/attachments/content%20(8).jpg) +![](photography/Aesthetic/Portrait/attachments/content%20(8).jpg) -![](Photography/Aesthetic/Portrait/attachments/content%20(9).jpg) +![](photography/Aesthetic/Portrait/attachments/content%20(9).jpg) -![](Photography/Aesthetic/Portrait/attachments/content%20(11).jpg) 
+![](photography/Aesthetic/Portrait/attachments/content%20(11).jpg) -![](Photography/Aesthetic/Portrait/attachments/content%20(12).jpg) +![](photography/Aesthetic/Portrait/attachments/content%20(12).jpg) -![](Photography/Aesthetic/Portrait/attachments/content.jpg) +![](photography/Aesthetic/Portrait/attachments/content.jpg) diff --git a/content/Photography/Aesthetic/Portrait/From Korean MV Todays_Mod.md b/content/Photography/Aesthetic/Portrait/From Korean MV Todays_Mod.md index 45ea7fb1a..2ac7f3e6a 100644 --- a/content/Photography/Aesthetic/Portrait/From Korean MV Todays_Mod.md +++ b/content/Photography/Aesthetic/Portrait/From Korean MV Todays_Mod.md @@ -14,22 +14,22 @@ Thanks Also, I see this in [摄影灵感|那有一点可爱 - by 小八怪](https://www.xiaohongshu.com/explore/63f0a27e0000000013002b05) -![](Photography/Aesthetic/Portrait/attachments/photo_4_2023-03-27_23-53-20.jpg) +![](photography/Aesthetic/Portrait/attachments/photo_4_2023-03-27_23-53-20.jpg) -![](Photography/Aesthetic/Portrait/attachments/photo_5_2023-03-27_23-53-20.jpg) +![](photography/Aesthetic/Portrait/attachments/photo_5_2023-03-27_23-53-20.jpg) -![](Photography/Aesthetic/Portrait/attachments/photo_6_2023-03-27_23-53-20.jpg) +![](photography/Aesthetic/Portrait/attachments/photo_6_2023-03-27_23-53-20.jpg) -![](Photography/Aesthetic/Portrait/attachments/photo_7_2023-03-27_23-53-20.jpg) +![](photography/Aesthetic/Portrait/attachments/photo_7_2023-03-27_23-53-20.jpg) -![](Photography/Aesthetic/Portrait/attachments/photo_8_2023-03-27_23-53-20.jpg) +![](photography/Aesthetic/Portrait/attachments/photo_8_2023-03-27_23-53-20.jpg) -![](Photography/Aesthetic/Portrait/attachments/photo_9_2023-03-27_23-53-20.jpg) +![](photography/Aesthetic/Portrait/attachments/photo_9_2023-03-27_23-53-20.jpg) -![](Photography/Aesthetic/Portrait/attachments/photo_1_2023-03-27_23-53-20%201.jpg) +![](photography/Aesthetic/Portrait/attachments/photo_1_2023-03-27_23-53-20%201.jpg) -![](Photography/Aesthetic/Portrait/attachments/photo_2_2023-03-27_23-53-20%201.jpg) +![](photography/Aesthetic/Portrait/attachments/photo_2_2023-03-27_23-53-20%201.jpg) -![](Photography/Aesthetic/Portrait/attachments/photo_3_2023-03-27_23-53-20%201.jpg) +![](photography/Aesthetic/Portrait/attachments/photo_3_2023-03-27_23-53-20%201.jpg) -![](Photography/Aesthetic/Portrait/attachments/photo_2023-03-27_23-55-45.jpg) \ No newline at end of file +![](photography/Aesthetic/Portrait/attachments/photo_2023-03-27_23-55-45.jpg) \ No newline at end of file diff --git a/content/Photography/Aesthetic/Portrait/Portrait_MOC.md b/content/Photography/Aesthetic/Portrait/Portrait_MOC.md index 03fc68e91..b9b1a49c9 100644 --- a/content/Photography/Aesthetic/Portrait/Portrait_MOC.md +++ b/content/Photography/Aesthetic/Portrait/Portrait_MOC.md @@ -7,5 +7,5 @@ tags: - MOC --- -* [🌸Flower & Girl](Photography/Aesthetic/Portrait/Flower_and_Girl.md) -* [👧🇰🇷Cute Portrait from Korean MV ](Photography/Aesthetic/Portrait/From%20Korean%20MV%20Todays_Mod.md) +* [🌸Flower & Girl](photography/Aesthetic/Portrait/Flower_and_Girl.md) +* [👧🇰🇷Cute Portrait from Korean MV ](photography/Aesthetic/Portrait/From%20Korean%20MV%20Todays_Mod.md) diff --git a/content/Photography/Aesthetic/Style/Grainy_Green.md b/content/Photography/Aesthetic/Style/Grainy_Green.md index a8ef3b099..78cc39d6a 100644 --- a/content/Photography/Aesthetic/Style/Grainy_Green.md +++ b/content/Photography/Aesthetic/Style/Grainy_Green.md @@ -7,10 +7,10 @@ tags: - share --- 
-![](Photography/Aesthetic/Style/attachments/cinematicshine_326914596_601425291912114_4038822895364546166_n.jpg)
+![](photography/Aesthetic/Style/attachments/cinematicshine_326914596_601425291912114_4038822895364546166_n.jpg)
-![](Photography/Aesthetic/Style/attachments/cinematicshine_341207739_637183131584785_7839745357939483631_n.jpg)
+![](photography/Aesthetic/Style/attachments/cinematicshine_341207739_637183131584785_7839745357939483631_n.jpg)
# Reference
diff --git a/content/Photography/Aesthetic/Style/Style_MOC.md b/content/Photography/Aesthetic/Style/Style_MOC.md
index 33506ad28..7135416fe 100644
--- a/content/Photography/Aesthetic/Style/Style_MOC.md
+++ b/content/Photography/Aesthetic/Style/Style_MOC.md
@@ -7,5 +7,5 @@ tags:
- MOC
---
-* [🌅Warmth - Nguan](Photography/Aesthetic/Style/Warmth_by_Nguan.md)
-* [📗 Grainy Green](Photography/Aesthetic/Style/Grainy_Green.md)
+* [🌅Warmth - Nguan](photography/Aesthetic/Style/Warmth_by_Nguan.md)
+* [📗 Grainy Green](photography/Aesthetic/Style/Grainy_Green.md)
diff --git a/content/Photography/Aesthetic/Style/Warmth_by_Nguan.md b/content/Photography/Aesthetic/Style/Warmth_by_Nguan.md
index ff4724f45..c2350f073 100644
--- a/content/Photography/Aesthetic/Style/Warmth_by_Nguan.md
+++ b/content/Photography/Aesthetic/Style/Warmth_by_Nguan.md
@@ -8,19 +8,19 @@ tags:
Credits to [Nguan](https://www.instagram.com/_nguan_/)
-![](Photography/Aesthetic/Style/attachments/167396766_118928406833773_7462235788758622009_n.jpg)
+![](photography/Aesthetic/Style/attachments/167396766_118928406833773_7462235788758622009_n.jpg)
-![](Photography/Aesthetic/Style/attachments/275801921_507726407459443_2779968335661218284_n.jpg)
+![](photography/Aesthetic/Style/attachments/275801921_507726407459443_2779968335661218284_n.jpg)
-![](Photography/Aesthetic/Style/attachments/275101252_116346090976633_4116581661408205933_n.jpg)
+![](photography/Aesthetic/Style/attachments/275101252_116346090976633_4116581661408205933_n.jpg)
-![](Photography/Aesthetic/Style/attachments/152391470_356387755409221_8144178651765781801_n.jpg)
+![](photography/Aesthetic/Style/attachments/152391470_356387755409221_8144178651765781801_n.jpg)
-![](Photography/Aesthetic/Style/attachments/153386473_426909131936316_8535520818773302544_n.jpg)
+![](photography/Aesthetic/Style/attachments/153386473_426909131936316_8535520818773302544_n.jpg)
-![](Photography/Aesthetic/Style/attachments/156216827_337435770999537_8250898900544979316_n.jpg)
+![](photography/Aesthetic/Style/attachments/156216827_337435770999537_8250898900544979316_n.jpg)
diff --git a/content/Photography/Basic/MTF_Curve.md b/content/Photography/Basic/MTF_Curve.md
index 4292a8501..fc8896a0a 100644
--- a/content/Photography/Basic/MTF_Curve.md
+++ b/content/Photography/Basic/MTF_Curve.md
@@ -21,7 +21,7 @@ tags:
# What is MTF Curve
-The modulation transfer function (MTF) curve is an information-dense metric that reflects how a lens *reproduces contrast as a function of spatial frequency (resolution)*. Under a fixed set of baseline parameters, the MTF curve provides a composite view of how [**optical aberrations**](Physics/Optical/optical_abberation.md) affect lens performance.
+The modulation transfer function (MTF) curve is an information-dense metric that reflects how a lens *reproduces contrast as a function of spatial frequency (resolution)*. Under a fixed set of baseline parameters, the MTF curve provides a composite view of how [**optical aberrations**](physics/Optical/optical_abberation.md) affect lens performance.
From an MTF chart, we can learn:
@@ -41,11 +41,11 @@ tags:
As you may know, the center of a lens resolves much better than its edges, so testing only the center or only the edge cannot represent overall lens quality. Manufacturers therefore sample multiple points from the center outward. As shown below, Nikon full-frame bodies are tested at points 5 mm, 10 mm, 15 mm and 20 mm from the center; for APS-C, because the sensor is smaller, points such as 3 mm, 6 mm, 9 mm and 12 mm are used instead, and different manufacturers may differ.
-![](Photography/Basic/attachments/Pasted%20image%2020230424143258.png)
+![](photography/Basic/attachments/Pasted%20image%2020230424143258.png)
The test target is usually black straight lines on a white background:
-![](Photography/Basic/attachments/Pasted%20image%2020230424143425.png)
+![](photography/Basic/attachments/Pasted%20image%2020230424143425.png)
* The **thick lines** are used to test **contrast**, at 10 lines/mm
* The **thin lines** are used to test **resolution**, at 30 lines/mm
The imaging quality in the figure below gets progressively worse:
-![](Photography/Basic/attachments/Pasted%20image%2020230424143543.png)
+![](photography/Basic/attachments/Pasted%20image%2020230424143543.png)
# How to read MTF curve
-![](Photography/Basic/attachments/Pasted%20image%2020230424143711.png)
+![](photography/Basic/attachments/Pasted%20image%2020230424143711.png)
The horizontal axis represents the distance from the center of the lens; the vertical axis represents the contrast and resolution values.
@@ -67,7 +67,7 @@ tags:
The blue line is obtained from the **thin-line** test and represents **resolution**.
-![](Photography/Basic/attachments/Pasted%20image%2020230424143940.png)
+![](photography/Basic/attachments/Pasted%20image%2020230424143940.png)
An ordinary lens has curves like the ones below (the red line represents contrast, the blue line resolution): contrast and resolution are best at the center of the lens and degrade toward the edges.
Waviness in a curve indicates field curvature; the larger the waves, the more severe it is, though in practice it is usually not a big problem.
-![](Photography/Basic/attachments/Pasted%20image%2020230424144046.png)
+![](photography/Basic/attachments/Pasted%20image%2020230424144046.png)
The most common kind of MTF chart looks like this:
-![](Photography/Basic/attachments/Pasted%20image%2020230424144112.png)
+![](photography/Basic/attachments/Pasted%20image%2020230424144112.png)
1. The red line (10 lines/mm, the thick lines from the test described above) measures contrast; its value drops gradually from the center of the lens to the edge, showing that the lens's contrast decreases from center to edge.
2. Resolution likewise decreases gradually from the center to the edge.
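To make the quantity behind these curves concrete, here is a minimal numeric sketch (the luminance readings are invented; only the 10 lines/mm test frequency comes from the note above). The MTF value at a given frequency is the modulation of the image divided by the modulation of the chart:

```python
def modulation(i_max: float, i_min: float) -> float:
    """Michelson contrast of a line pattern: (Imax - Imin) / (Imax + Imin)."""
    return (i_max - i_min) / (i_max + i_min)

# Hypothetical readings for a 10 lines/mm target and the image a lens delivers.
m_object = modulation(i_max=1.00, i_min=0.00)   # a perfect chart: contrast 1.0
m_image  = modulation(i_max=0.85, i_min=0.15)   # the lens loses some contrast

mtf = m_image / m_object
print(f"MTF @ 10 lines/mm ~ {mtf:.2f}")         # ~0.70, the height of the '10' curve
```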
diff --git a/content/Photography/Cameras_Research/Pocket_film/Pocket_film_camera_MOC.md b/content/Photography/Cameras_Research/Pocket_film/Pocket_film_camera_MOC.md
index 9d0b37fe3..3fe6b0e03 100644
--- a/content/Photography/Cameras_Research/Pocket_film/Pocket_film_camera_MOC.md
+++ b/content/Photography/Cameras_Research/Pocket_film/Pocket_film_camera_MOC.md
@@ -9,4 +9,4 @@ tags:
# Rollei
-* [Rollei35](Photography/Cameras_Research/Pocket_film/Rollei_35.md)
\ No newline at end of file
+* [Rollei35](photography/Cameras_Research/Pocket_film/Rollei_35.md)
\ No newline at end of file
diff --git a/content/Photography/Cameras_Research/Polaroid/Polaroid.md b/content/Photography/Cameras_Research/Polaroid/Polaroid.md
index b7dd64acc..24225cca6 100644
--- a/content/Photography/Cameras_Research/Polaroid/Polaroid.md
+++ b/content/Photography/Cameras_Research/Polaroid/Polaroid.md
@@ -9,7 +9,7 @@ tags:
# Polaroid Background
-![](Photography/Cameras_Research/Polaroid/attachments/Pasted%20image%2020230330195031.png)
+![](photography/Cameras_Research/Polaroid/attachments/Pasted%20image%2020230330195031.png)
Polaroid is an American camera and film company founded in 1937 that was once the leader of the instant-camera market. Polaroid launched its first instant camera in the 1950s and kept releasing new instant camera and film models over the following decades, becoming a brand used worldwide.
One of Polaroid's best-known features is its "instant imaging" technology, a technology that
@@ -21,5 +21,5 @@
# Polaroid Camera Review
-* [Polaroid one600](Photography/Cameras_Research/Polaroid/Polaroid_one600.md)
-* [Polaroid Integral 600 Series](Photography/Cameras_Research/Polaroid/Polaroid_600.md)
+* [Polaroid one600](photography/Cameras_Research/Polaroid/Polaroid_one600.md)
+* [Polaroid Integral 600 Series](photography/Cameras_Research/Polaroid/Polaroid_600.md)
diff --git a/content/Photography/Cameras_Research/Polaroid/Polaroid_one600.md b/content/Photography/Cameras_Research/Polaroid/Polaroid_one600.md
index d5da60176..21a0c4c11 100644
--- a/content/Photography/Cameras_Research/Polaroid/Polaroid_one600.md
+++ b/content/Photography/Cameras_Research/Polaroid/Polaroid_one600.md
@@ -8,7 +8,7 @@ tags:
---
-![](Photography/Cameras_Research/Polaroid/attachments/Pasted%20image%2020230330195707.png)
+![](photography/Cameras_Research/Polaroid/attachments/Pasted%20image%2020230330195707.png)
# Specifications
diff --git a/content/Photography/MoodBoard/Sea_20230428/Sea_20230428.md b/content/Photography/MoodBoard/Sea_20230428/Sea_20230428.md
index 272b3b2b2..e04f967be 100644
--- a/content/Photography/MoodBoard/Sea_20230428/Sea_20230428.md
+++ b/content/Photography/MoodBoard/Sea_20230428/Sea_20230428.md
@@ -7,4 +7,4 @@ tags:
---
-* [idea - reference image](Photography/MoodBoard/Sea_20230428/idea.md)
+* [idea - reference image](photography/MoodBoard/Sea_20230428/idea.md)
diff --git a/content/Photography/MoodBoard/Sea_20230428/idea.md b/content/Photography/MoodBoard/Sea_20230428/idea.md
index d572a5e96..473b43f0b 100644
--- a/content/Photography/MoodBoard/Sea_20230428/idea.md
+++ b/content/Photography/MoodBoard/Sea_20230428/idea.md
@@ -6,40 +6,40 @@ tags:
- idea
---
-# [Fujifilm_Blue_by_小红书_Philips谢骏](Photography/Aesthetic/Landscape/Sea/Fujifilm_Blue_by_小红书_Philips谢骏.md)
+# [Fujifilm_Blue_by_小红书_Philips谢骏](photography/Aesthetic/Landscape/Sea/Fujifilm_Blue_by_小红书_Philips谢骏.md)
-![](Photography/Aesthetic/Landscape/Sea/attachments/Pasted%20image%2020230420014349.png)
+![](photography/Aesthetic/Landscape/Sea/attachments/Pasted%20image%2020230420014349.png)
-![](Photography/Aesthetic/Landscape/Sea/attachments/Pasted%20image%2020230420014354.png)
+![](photography/Aesthetic/Landscape/Sea/attachments/Pasted%20image%2020230420014354.png)
-![](Photography/Aesthetic/Landscape/Sea/attachments/Pasted%20image%2020230420014401.png)
+![](photography/Aesthetic/Landscape/Sea/attachments/Pasted%20image%2020230420014401.png)
-![](Photography/Aesthetic/Landscape/Sea/attachments/Pasted%20image%2020230420014613.png)
+![](photography/Aesthetic/Landscape/Sea/attachments/Pasted%20image%2020230420014613.png)
-![](Photography/Aesthetic/Landscape/Sea/attachments/Pasted%20image%2020230420014622.png)
+![](photography/Aesthetic/Landscape/Sea/attachments/Pasted%20image%2020230420014622.png)
-![](Photography/Aesthetic/Landscape/Sea/attachments/Pasted%20image%2020230420014634.png)
+![](photography/Aesthetic/Landscape/Sea/attachments/Pasted%20image%2020230420014634.png)
-# [豊島_Instagram_shiifoncake](Photography/Aesthetic/Landscape/Sea/豊島_Instagram_shiifoncake.md)
+# [豊島_Instagram_shiifoncake](photography/Aesthetic/Landscape/Sea/豊島_Instagram_shiifoncake.md)
-![](Photography/MoodBoard/Sea_20230428/attachments/shiifoncake_338949220_771246770941652_287141902256013940_n.jpg)
+![](photography/MoodBoard/Sea_20230428/attachments/shiifoncake_338949220_771246770941652_287141902256013940_n.jpg)
-![](Photography/MoodBoard/Sea_20230428/attachments/shiifoncake_339164445_155642070453847_6842139942547564019_n%20(1).jpg)
+![](photography/MoodBoard/Sea_20230428/attachments/shiifoncake_339164445_155642070453847_6842139942547564019_n%20(1).jpg)
-![](Photography/MoodBoard/Sea_20230428/attachments/shiifoncake_339164445_155642070453847_6842139942547564019_n.jpg)
+![](photography/MoodBoard/Sea_20230428/attachments/shiifoncake_339164445_155642070453847_6842139942547564019_n.jpg)
-![](Photography/MoodBoard/Sea_20230428/attachments/shiifoncake_338803198_1141886276488589_5464974698780309052_n%20(1).jpg)
+![](photography/MoodBoard/Sea_20230428/attachments/shiifoncake_338803198_1141886276488589_5464974698780309052_n%20(1).jpg)
-![](Photography/MoodBoard/Sea_20230428/attachments/shiifoncake_338803198_1141886276488589_5464974698780309052_n.jpg)
+![](photography/MoodBoard/Sea_20230428/attachments/shiifoncake_338803198_1141886276488589_5464974698780309052_n.jpg)
-![](Photography/MoodBoard/Sea_20230428/attachments/shiifoncake_338758486_601356648715316_3737336679741136784_n.jpg)
+![](photography/MoodBoard/Sea_20230428/attachments/shiifoncake_338758486_601356648715316_3737336679741136784_n.jpg)
# [寄り道の理由。- Instagram, photono_gen](https://www.instagram.com/p/CrVPFjZvvlr/)
-![](Photography/MoodBoard/Sea_20230428/attachments/photono_gen_336060179_2380745882102401_2427706248624984378_n.jpg)
\ No newline at end of file
+![](photography/MoodBoard/Sea_20230428/attachments/photono_gen_336060179_2380745882102401_2427706248624984378_n.jpg)
\ No newline at end of file
diff --git a/content/Photography/Photography_MOC.md b/content/Photography/Photography_MOC.md
index a46558748..90af443ef 100644
--- a/content/Photography/Photography_MOC.md
+++ b/content/Photography/Photography_MOC.md
@@ -18,38 +18,38 @@ Also, here's my notes about learning photography
## About Basic Concepts:
-* [Saturation](Photography/Basic/Saturation.md)
+* [Saturation](photography/Basic/Saturation.md)
## Appreciation of other works - about ***aesthetic***
-* [👧Portrait](Photography/Aesthetic/Portrait/Portrait_MOC.md)
-* [🏔Landscape](Photography/Aesthetic/Landscape/Landscape_MOC.md)
-* [☝Style](Photography/Aesthetic/Style/Style_MOC.md)
-* [✨Polaroid](Photography/Aesthetic/Polaroid/Polaroid_aesthetic_MOC.md)
+* [👧Portrait](photography/Aesthetic/Portrait/Portrait_MOC.md)
+* [🏔Landscape](photography/Aesthetic/Landscape/Landscape_MOC.md)
+* [☝Style](photography/Aesthetic/Style/Style_MOC.md)
+* [✨Polaroid](photography/Aesthetic/Polaroid/Polaroid_aesthetic_MOC.md)
## Camera Research
-* [✨Polaroid](Photography/Cameras_Research/Polaroid/Polaroid.md)
-* [📷Lens Structure](Photography/Cameras_Research/Lens_Structure/Lens_Structure_MOC.md)
-* [📸Pocket film camera](Photography/Cameras_Research/Pocket_film/Pocket_film_camera_MOC.md)
+* [✨Polaroid](photography/Cameras_Research/Polaroid/Polaroid.md)
+* [📷Lens Structure](photography/Cameras_Research/Lens_Structure/Lens_Structure_MOC.md)
+* [📸Pocket film camera](photography/Cameras_Research/Pocket_film/Pocket_film_camera_MOC.md)
## Skills I learned
-* [How to measure light using Polaroid?](Photography/Skills/Polaroid_light.md)
-* [How to use Moodboard](Photography/Skills/Moodboard.md)
-* [How to show your Polaroid Picture](Photography/Aesthetic/Polaroid/Polaroid_showcase.md)
+* [How to measure light using Polaroid?](photography/Skills/Polaroid_light.md)
+* [How to use Moodboard](photography/Skills/Moodboard.md)
+* [How to show your Polaroid Picture](photography/Aesthetic/Polaroid/Polaroid_showcase.md)
## Photography story
-* [Night climb of 蛤蟆峰 shooting Polaroid long exposures - 2023.04.14](Photography/Story/Rainy_evening_hiking_Polaroid.md)
+* [Night climb of 蛤蟆峰 shooting Polaroid long exposures - 2023.04.14](photography/Story/Rainy_evening_hiking_Polaroid.md)
## Mood Board
-* [🌊Sea - 2023.04.28](Photography/MoodBoard/Sea_20230428/Sea_20230428.md)
+* [🌊Sea - 2023.04.28](photography/MoodBoard/Sea_20230428/Sea_20230428.md)
## Meme
-* [Photography meme](Photography/Photography_meme/Photography_meme.md)
+* [Photography meme](photography/Photography_meme/Photography_meme.md)
# Reference
diff --git a/content/Photography/Photography_meme/Photography_meme.md b/content/Photography/Photography_meme/Photography_meme.md
index 8b44827a7..8d404ba6c 100644
--- a/content/Photography/Photography_meme/Photography_meme.md
+++ b/content/Photography/Photography_meme/Photography_meme.md
@@ -7,4 +7,4 @@ tags:
- happy
---
-![](Photography/Photography_meme/attachments/QQ图片20230424193512.png)
\ No newline at end of file
+![](photography/Photography_meme/attachments/QQ图片20230424193512.png)
\ No newline at end of file
diff --git a/content/Photography/Skills/howToShowPolaroid.md b/content/Photography/Skills/howToShowPolaroid.md
index 6edea8f3e..d5138a151 100644
--- a/content/Photography/Skills/howToShowPolaroid.md
+++ b/content/Photography/Skills/howToShowPolaroid.md
@@ -6,4 +6,4 @@ tags:
- skill
---
-* [Polaroid re-shoot 9-grid](Photography/Aesthetic/Polaroid/Polaroid_showcase.md)
\ No newline at end of file
+* [Polaroid re-shoot 9-grid](photography/Aesthetic/Polaroid/Polaroid_showcase.md)
\ No newline at end of file
diff --git a/content/Photography/Story/Rainy_evening_hiking_Polaroid.md b/content/Photography/Story/Rainy_evening_hiking_Polaroid.md
index 9b2ac54b0..c6e87bd75 100644
--- a/content/Photography/Story/Rainy_evening_hiking_Polaroid.md
+++ b/content/Photography/Story/Rainy_evening_hiking_Polaroid.md
@@ -15,7 +15,7 @@ tags:
At the foot of the hill, the light rain already gave a strong Tyndall-effect feel.
-![](Photography/Story/attachments/9970714720C0835E6547C263418D551B.jpg)
+![](photography/Story/attachments/9970714720C0835E6547C263418D551B.jpg)
The rain gradually made the rocks slippery, and climbing the rocks at the summit of 蛤蟆峰 quickly becomes very dangerous; it is hard to describe, so perhaps ask a friend who is a Hangzhou local. 周潭 fell just before the last stretch of the climb. Luckily his backpack absorbed almost all of the impact, and it made him realize how dangerous this place is in the rain; there is a streak of extreme sport to it.
@@ -25,14 +25,14 @@ tags:
Shooting long exposures on the summit of 蛤蟆峰 takes some tripod-rigging and light-metering technique, which gets even harder in the rain.
-![](Photography/Story/attachments/QQ视频20230416012046.mp4)
+![](photography/Story/attachments/QQ视频20230416012046.mp4)
-![](Photography/Story/attachments/FCB8B96468D3B459532E010E865D0B99.jpg)
+![](photography/Story/attachments/FCB8B96468D3B459532E010E865D0B99.jpg)
After metering and adjusting the exposure in the Polaroid app, the plan for this night scene was to shoot at $f/22$, 30 s shutter speed, i-type film at ISO 640. Here is the result first:
-![](Photography/Story/attachments/IMG_5553.jpg)
+![](photography/Story/attachments/IMG_5553.jpg)
The photo was scanned film -> digital with an iPhone 12 mini and the Polaroid app scanner. The scan is mediocre, but we can still see that the exposure came out unsatisfying. I attribute this to the following:
* Bad weather and high humidity in the air, which made light dispersion worse
@@ -41,11 +41,11 @@ tags:
Also, that night I did not yet understand the + button of the Now+, which wasted one sheet of film. Here is how the + button on the Now+ works:
-![](Photography/Story/attachments/Pasted%20image%2020230416014050.png)
+![](photography/Story/attachments/Pasted%20image%2020230416014050.png)
Also, during one exposure that night the aperture was accidentally knocked to $f/33$, which made the underexposure even worse. The effect looks roughly like this:
-![](Photography/Story/attachments/IMG_5550.jpg)
+![](photography/Story/attachments/IMG_5550.jpg)
Note too that Polaroid's exposure time is at most 30 s. For a longer exposure you can double-expose without ejecting the sheet, but exposures beyond 30 s may come out rather poorly.
@@ -53,10 +53,10 @@ tags:
We made two portraits with the same exposure parameters ($f/22$, 30 s shutter speed, i-type film at ISO 640), with the Polaroid flash at its highest level:
-![](Photography/Story/attachments/IMG_5492.jpg)
+![](photography/Story/attachments/IMG_5492.jpg)
-![](Photography/Story/attachments/IMG_5493.jpg)
+![](photography/Story/attachments/IMG_5493.jpg)
The first portrait is a bit sharper; in my personal view, that is because of the reflection off the umbrella.
@@ -72,9 +72,9 @@ tags:
The driver had gone to 滨江, so we had to wait at the foot of the hill, at the 忠儿面馆 noodle shop on 保俶路. 周潭 happened to still be hungry, so by coincidence we also ate a bowl of 拌川, a fairly Hangzhou-style fried noodle dish, there.
-![](Photography/Story/attachments/A9A6699D1859851AB1D66131BD1382DC.jpg)
+![](photography/Story/attachments/A9A6699D1859851AB1D66131BD1382DC.jpg)
# Route
-![](Photography/Story/attachments/QQ图片20230417203443.jpg)
\ No newline at end of file
+![](photography/Story/attachments/QQ图片20230417203443.jpg)
\ No newline at end of file
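As a quick sanity check of the settings discussed in the story, the standard exposure-value relation can be computed directly (a sketch; only f/22, 30 s, ISO 640 and the accidental f/33 come from the story, the rest is generic):

```python
import math

def exposure_value(f_number: float, shutter_s: float, iso: float = 100.0) -> float:
    """Scene EV referenced to ISO 100: EV = log2(N^2 / t) - log2(ISO / 100)."""
    return math.log2(f_number**2 / shutter_s) - math.log2(iso / 100.0)

# The night-scene plan: f/22, 30 s, ISO 640 i-type film.
print(round(exposure_value(22, 30, 640), 1))    # ~1.3 -> a very dark scene
# The accidental f/33 costs log2(33^2 / 22^2) ~ 1.2 stops of light.
print(round(math.log2(33**2 / 22**2), 1))
```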
diff --git a/content/Physics/Electromagnetism/Basic/Electric_units.md b/content/Physics/Electromagnetism/Basic/Electric_units.md
index 8555d796b..269b49feb 100644
--- a/content/Physics/Electromagnetism/Basic/Electric_units.md
+++ b/content/Physics/Electromagnetism/Basic/Electric_units.md
@@ -18,7 +18,7 @@ $$
* $X_L$ = inductive reactance
* $X_C$ = capacitive reactance
-![](Physics/Electromagnetism/Basic/attachments/Pasted%20image%2020230330163734.png)
+![](physics/Electromagnetism/Basic/attachments/Pasted%20image%2020230330163734.png)
**Impedance** is the collective name for the opposition that resistance, inductance and capacitance present to alternating current in a circuit. Impedance is a complex number: the real part is called **resistance** and the imaginary part is called **reactance**. The opposition a capacitor presents to AC is called **capacitive reactance**, the opposition an inductor presents to AC is called **inductive reactance**, and together the two are called **reactance**.
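A small sketch tying these definitions together with Python's native complex numbers (the component values and the 5 kHz test frequency are invented):

```python
import math

def impedance(r: float, l: float, c: float, f: float) -> complex:
    """Series RLC impedance Z = R + j(X_L - X_C) at frequency f."""
    x_l = 2 * math.pi * f * l          # inductive reactance, X_L = 2*pi*f*L
    x_c = 1 / (2 * math.pi * f * c)    # capacitive reactance, X_C = 1/(2*pi*f*C)
    return complex(r, x_l - x_c)

z = impedance(r=50.0, l=1e-3, c=1e-6, f=5_000.0)
print(abs(z))                                        # magnitude of Z
print(math.degrees(math.atan2(z.imag, z.real)))      # phase angle
```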
diff --git a/content/Physics/Electromagnetism/Electromagnetism_MOC.md b/content/Physics/Electromagnetism/Electromagnetism_MOC.md
index 81d7a5e27..a450f365c 100644
--- a/content/Physics/Electromagnetism/Electromagnetism_MOC.md
+++ b/content/Physics/Electromagnetism/Electromagnetism_MOC.md
@@ -8,12 +8,12 @@ tags:
# Basic
-* [Electric units](Physics/Electromagnetism/Basic/Electric_units.md)
+* [Electric units](physics/Electromagnetism/Basic/Electric_units.md)
## Advanced
-* [Maxwell's equation](Physics/Electromagnetism/Maxwells_equation.md)
+* [Maxwell's equation](physics/Electromagnetism/Maxwells_equation.md)
# Circuit
-* [Resonant circuit](Physics/Electromagnetism/Resonant_circuit.md)
\ No newline at end of file
+* [Resonant circuit](physics/Electromagnetism/Resonant_circuit.md)
\ No newline at end of file
diff --git a/content/Physics/Electromagnetism/Maxwells_equation.md b/content/Physics/Electromagnetism/Maxwells_equation.md
index 44afbad21..a5644f725 100644
--- a/content/Physics/Electromagnetism/Maxwells_equation.md
+++ b/content/Physics/Electromagnetism/Maxwells_equation.md
@@ -32,11 +32,11 @@ Essentially a vector field is what you get if you associate each point in space
> [!note]
> If you were to draw the vectors to scale, the longer ones end up just cluttering the whole thing, so it's common to basically lie a little and artificially shorten ones that are too long. Maybe using **color to give some vague sense of length**.
-![](Physics/Electromagnetism/attachments/Pasted%20image%2020230411151612.png)
+![](physics/Electromagnetism/attachments/Pasted%20image%2020230411151612.png)
## Divergence
-![](Physics/Electromagnetism/attachments/my-life.gif)
+![](physics/Electromagnetism/attachments/my-life.gif)
The divergence of the vector field measures the capacity of the point (x, y) to generate fluid.
Sink points, where fluid flows in, have negative divergence.
-![](Physics/Electromagnetism/attachments/Pasted%20image%2020230411155711.png)
+![](physics/Electromagnetism/attachments/Pasted%20image%2020230411155711.png)
Likewise, at a point where slow inflow turns into fast outflow, the divergence of the vector field is also positive.
-![](Physics/Electromagnetism/attachments/my-life%201.gif)
+![](physics/Electromagnetism/attachments/my-life%201.gif)
A vector field takes an input point and returns a multi-dimensional output: a direction with a scale. The divergence of the vector field instead outputs a single number that depends on the behavior of the field in a small neighborhood around that point, measuring whether the point acts as a source or a sink.
-![](Physics/Electromagnetism/attachments/Pasted%20image%2020230411161346.png)
+![](physics/Electromagnetism/attachments/Pasted%20image%2020230411161346.png)
> [!note]
> For actual fluid flow: $\text{div} F = 0$ everywhere
## Curl
-![](Physics/Electromagnetism/attachments/output%202.gif)
+![](physics/Electromagnetism/attachments/output%202.gif)
Curl measures how much the fluid is rotated at a point: counterclockwise rotation is positive curl, clockwise is negative (this sign convention is the one implied by the 2D curl formula below).
-![](Physics/Electromagnetism/attachments/curl.gif)
+![](physics/Electromagnetism/attachments/curl.gif)
The curl at the point in the figure above is also nonzero, because the fluid is fast on top and slow on the bottom, resulting in a clockwise influence
@@ -94,23 +94,23 @@
\text{curl} F = \frac{\partial F_y}{\partial x} - \frac{\partial F_x}{\partial y}
$$
-![](Physics/Electromagnetism/attachments/calculation_result.gif)
+![](physics/Electromagnetism/attachments/calculation_result.gif)
### Detail Explanation
-![](Physics/Electromagnetism/attachments/Pasted%20image%2020230412144351.png)
+![](physics/Electromagnetism/attachments/Pasted%20image%2020230412144351.png)
-![](Physics/Electromagnetism/attachments/Pasted%20image%2020230412144501.png)
+![](physics/Electromagnetism/attachments/Pasted%20image%2020230412144501.png)
Take a tiny step away from $(x_0, y_0)$: there is a new vector, and it differs from the original vector by some difference.
-![](Physics/Electromagnetism/attachments/div.gif)
+![](physics/Electromagnetism/attachments/div.gif)
$\text{div} F(x_0, y_0)$ corresponds to the average of Step $\cdot$ Difference over all $360^\circ$ of directions.
Picture a source point that shoots vectors out in every direction: its Step $\cdot$ Difference is naturally positive.
-![](Physics/Electromagnetism/attachments/Pasted%20image%2020230412145732.png)
+![](physics/Electromagnetism/attachments/Pasted%20image%2020230412145732.png)
By the same token, it is not hard to see that $\text{curl} F(x_0, y_0)$ corresponds to Step $\times$ Difference.
@@ -125,7 +125,7 @@ $$
$$
-![](Physics/Electromagnetism/attachments/Pasted%20image%2020230411163735.png)
+![](physics/Electromagnetism/attachments/Pasted%20image%2020230411163735.png)
* $\rho$ is the charge density
* $\epsilon_0$ is epsilon naught, the permittivity of free space, which sets the strength of the electric field in free space
@@ -146,7 +146,7 @@ $$
\text{div} B = 0
$$
-![](Physics/Electromagnetism/attachments/Pasted%20image%2020230411165048.png)
+![](physics/Electromagnetism/attachments/Pasted%20image%2020230411165048.png)
The divergence of the magnetic field is zero everywhere, meaning the magnetic "fluid" is incompressible, with no sources and no sinks, just like water. Another interpretation is that magnetic monopoles do not exist.
@@ -156,16 +156,16 @@ $$
\nabla \times E = - \frac{1}{c} \frac{\partial B}{\partial t}
$$
-![](Physics/Electromagnetism/attachments/Pasted%20image%2020230419141438.png)
+![](physics/Electromagnetism/attachments/Pasted%20image%2020230419141438.png)
-![](Physics/Electromagnetism/attachments/Pasted%20image%2020230419141637.png)
+![](physics/Electromagnetism/attachments/Pasted%20image%2020230419141637.png)
## Ampère's circuital law (with Maxwell's addition)
$$
\nabla \times B = \frac{1}{c} (4\pi J + \frac{\partial E}{\partial t})
$$
-![](Physics/Electromagnetism/attachments/Pasted%20image%2020230419141737.png)
+![](physics/Electromagnetism/attachments/Pasted%20image%2020230419141737.png)
# Maxwell's equations explain EM waves
@@ -184,7 +184,7 @@ $$
The electromagnetic wave band lies outside what the naked eye can observe; only after Maxwell's death did Hertz experimentally confirm that electromagnetic waves exist.
-![](Physics/Electromagnetism/attachments/Pasted%20image%2020230419155744.png)
+![](physics/Electromagnetism/attachments/Pasted%20image%2020230419155744.png)
# Reference
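A tiny symbolic check of the two operators on the simplest source-like and rotation-like fields (a sketch using sympy; the two example fields are mine, not from the note):

```python
import sympy as sp

x, y = sp.symbols("x y")
Fx, Fy = x, y        # a pure "source" field F = (x, y)
Gx, Gy = -y, x       # a pure rotation field G = (-y, x)

div_F  = sp.diff(Fx, x) + sp.diff(Fy, y)   # dFx/dx + dFy/dy
curl_F = sp.diff(Fy, x) - sp.diff(Fx, y)   # dFy/dx - dFx/dy (2D curl)
div_G  = sp.diff(Gx, x) + sp.diff(Gy, y)
curl_G = sp.diff(Gy, x) - sp.diff(Gx, y)

print(div_F, curl_F)   # 2 0 -> everywhere a source, no rotation
print(div_G, curl_G)   # 0 2 -> no source/sink, counterclockwise rotation
```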
diff --git a/content/Physics/Electromagnetism/Q_factor.md b/content/Physics/Electromagnetism/Q_factor.md
index d99286605..b51950464 100644
--- a/content/Physics/Electromagnetism/Q_factor.md
+++ b/content/Physics/Electromagnetism/Q_factor.md
@@ -16,7 +16,7 @@ In physics and engineering, the quality factor or Q factor is a **dimensionless*
-![](Physics/Electromagnetism/attachments/Pasted%20image%2020230404144801.png)Fig. A damped oscillation. A low Q factor – about 5 here – means the oscillation dies out rapidly.
+![](physics/Electromagnetism/attachments/Pasted%20image%2020230404144801.png)Fig. A damped oscillation. A low Q factor – about 5 here – means the oscillation dies out rapidly.
An oscillator with a higher Q factor resonates with **larger amplitude** near its resonant frequency, but over a **narrower range of frequencies**; this range can be called the bandwidth.
@@ -31,7 +31,7 @@ An oscillator with a higher Q factor resonates with larger amplitude near its resonant frequency,
# Definition
-![](Physics/Electromagnetism/attachments/Pasted%20image%2020230404151254.png)
+![](physics/Electromagnetism/attachments/Pasted%20image%2020230404151254.png)
Fig. The bandwidth $\Delta f$ of a damped harmonic oscillator, shown on a frequency-energy plot. The Q factor of the damped oscillator (or filter) is $f_{0}/\Delta f$. The larger the Q factor, the taller the peak and the narrower its width.
diff --git a/content/Physics/Electromagnetism/Resonant_circuit.md b/content/Physics/Electromagnetism/Resonant_circuit.md
index 1eed506ef..c53eeec44 100644
--- a/content/Physics/Electromagnetism/Resonant_circuit.md
+++ b/content/Physics/Electromagnetism/Resonant_circuit.md
@@ -19,7 +19,7 @@ tags:
## *Resonant Frequency*
-Resonance occurs when the [reactances](Physics/Electromagnetism/Basic/Electric_units.md#Electrical%20impedance) of the capacitor and the inductor are equal in magnitude
+Resonance occurs when the [reactances](physics/Electromagnetism/Basic/Electric_units.md#Electrical%20impedance) of the capacitor and the inductor are equal in magnitude
$$
|X_C| = |\frac{1}{j2\pi fC}| = |X_L| = |j2\pi fL|
$$
@@ -38,7 +38,7 @@ $$
* The impedance is at its minimum and purely resistive: $Z = R + jX_L - jX_C = R$
-## **Quality factor** ([*Q factor*](Physics/Electromagnetism/Q_factor.md))
+## **Quality factor** ([*Q factor*](physics/Electromagnetism/Q_factor.md))
* The ratio of the reactive power produced in the inductor or capacitor at resonance to the average power dissipated in the resistor is called the quality factor at resonance.
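A runnable sketch of these relations for the series RLC case (component values are invented; Q = f0/Δf follows the Q-factor definition above, and Q = (1/R)·sqrt(L/C) is the standard series-circuit form):

```python
import math

def series_rlc(r: float, l: float, c: float):
    """Resonant frequency, Q and bandwidth of a series RLC circuit."""
    f0 = 1 / (2 * math.pi * math.sqrt(l * c))   # where |X_L| = |X_C|
    q = math.sqrt(l / c) / r                    # Q = (1/R) * sqrt(L/C)
    return f0, q, f0 / q                        # bandwidth Δf = f0 / Q

f0, q, bw = series_rlc(r=10.0, l=1e-3, c=1e-9)
print(f"f0 ~ {f0:.0f} Hz, Q ~ {q:.0f}, Δf ~ {bw:.0f} Hz")
```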
diff --git a/content/Physics/Optical/optical_abberation.md b/content/Physics/Optical/optical_abberation.md
index e9a0255f4..918a92783 100644
--- a/content/Physics/Optical/optical_abberation.md
+++ b/content/Physics/Optical/optical_abberation.md
@@ -14,7 +14,7 @@ tags:
To explain how aberrations blur an image, we should first explain: what is a circle of confusion? When a point of light coming from the subject reaches the lens and converges on the sensor, it is sharp. Otherwise, if it converges in front of or behind the sensor, the light is spread more broadly across the sensor. This can be seen in Fig. 1, where the point source converges on the sensor, but as the sensor position changes, the amount of light spread along the sensor changes as well.
-![](Physics/Optical/attachments/Fig_1_Circles_of_confusion.gif)
+![](physics/Optical/attachments/Fig_1_Circles_of_confusion.gif)
The more the light spreads, the less in focus the image is. Unless the aperture is very small, subjects that sit far apart from each other in the image will usually leave the background or foreground out of focus, because the light converging from the foreground and the light from farther subjects in the background converge at different points.
@@ -25,7 +25,7 @@
Coma, also known as comatic aberration, is named after the comet-like tail of its distribution pattern.
-![](Physics/Optical/attachments/Pasted%20image%2020230424110844.png)
+![](physics/Optical/attachments/Pasted%20image%2020230424110844.png)
It is a defect inherent to some lenses, or produced by the optical design, that deforms point sources located off the optical axis, such as stars. In particular, coma is defined as a variation in magnification over the entrance pupil. In refractive or diffractive optical systems, especially in images spanning a wide spectral range, coma is a function of wavelength.
@@ -35,7 +35,7 @@
This can be seen in Fig. 3, where the two focal points are represented by a red horizontal plane and a blue vertical plane. The point of best sharpness in the image lies between these two points, where the circle of confusion of either plane is not too wide.
-![](Physics/Optical/attachments/Pasted%20image%2020230424111226.png)
+![](physics/Optical/attachments/Pasted%20image%2020230424111226.png)
When the optics are misaligned, astigmatism distorts the sides and edges of the image. It is usually described as a lack of sharpness when looking at lines in the image.
@@ -47,7 +47,7 @@
Field curvature is the result of the image plane becoming non-flat due to multiple focal points.
-![](Physics/Optical/attachments/Pasted%20image%2020230424112159.png)
+![](physics/Optical/attachments/Pasted%20image%2020230424112159.png)
Camera lenses have largely corrected for this, but some field curvature may be found on many lenses. Some sensor manufacturers are actually working on curved sensors that can compensate for a curved focal region. Such a design would let the sensor correct the aberration without requiring expensive lens designs manufactured to that precision; by adopting this type of sensor, cheaper lenses could produce high-quality results. A real example of this can be seen in the Kepler space observatory, where a curved sensor array corrects for the telescope's large spherical optics.
@@ -59,7 +59,7 @@
In an image with barrel distortion, the edges and sides bend away from the center. Visually it looks as if there is a bulge in the image, because it captures the appearance of a curved field of view (FoV). For example, using a shorter-focal-length lens (also called a wide-angle lens) from high up on a tall building captures a wider FoV. As shown in Fig. 5, this is most exaggerated when using a fisheye lens, which produces a very distorted, wide FoV. In this image, grid lines help illustrate how the distortion stretches the image outward near the sides and edges.
-![](Physics/Optical/attachments/Pasted%20image%2020230424113453.png)
+![](physics/Optical/attachments/Pasted%20image%2020230424113453.png)
### Pincushion distortion (枕型畸变)
This form of aberration is most common in telephoto lenses with longer focal lengths.
-![](Physics/Optical/attachments/Pasted%20image%2020230424113838.png)
+![](physics/Optical/attachments/Pasted%20image%2020230424113838.png)
### Mustache distortion
@@ -81,13 +81,13 @@
The color of light corresponds to a specific wavelength. Because of refraction, a color image has multiple wavelengths entering the lens and focusing at different points. Longitudinal, or axial, chromatic aberration is caused by different wavelengths focusing at different points along the optical axis. The shorter the wavelength, the closer its focal point sits to the lens; the longer the wavelength, conversely, the farther away, as shown in Fig. 8. With a smaller aperture the incoming light may still focus at different points, but the width (diameter) of the "circles of confusion" is much smaller, resulting in far less dramatic blur.
-![](Physics/Optical/attachments/Fig_8_Chromatic_abberation_animation.gif)
+![](physics/Optical/attachments/Fig_8_Chromatic_abberation_animation.gif)
### Transverse / lateral aberration
Off-axis light that spreads the different wavelengths along the image plane produces transverse, or lateral, chromatic aberration. It causes colored fringing around the edges of subjects in the image. It is harder to correct than longitudinal chromatic aberration.
-![](Physics/Optical/attachments/Fig_9_Chromatic_aberration_lateral.gif)
+![](physics/Optical/attachments/Fig_9_Chromatic_aberration_lateral.gif)
It can be fixed with an achromatic doublet, which introduces different refractive indices. By bringing the two ends of the visible spectrum to a single focal point, the color fringing can be eliminated. For both lateral and longitudinal chromatic aberration, reducing the aperture size also helps. It can also be beneficial not to image subjects in high-contrast settings (i.e., images with a very bright background). In microscopy, objectives may use apochromatic (APO) lenses instead of achromats; these use three lens elements to correct all wavelengths of the incoming light. When color matters most, making sure chromatic aberration is mitigated will produce the best results.
diff --git a/content/Physics/Physics_MOC.md b/content/Physics/Physics_MOC.md
index a29a9e36a..51b417525 100644
--- a/content/Physics/Physics_MOC.md
+++ b/content/Physics/Physics_MOC.md
@@ -7,4 +7,4 @@ tags:
# Electromagnetism
-* [Electromagnetism MOC](Physics/Electromagnetism/Electromagnetism_MOC.md)
\ No newline at end of file
+* [Electromagnetism MOC](physics/Electromagnetism/Electromagnetism_MOC.md)
\ No newline at end of file
diff --git a/content/Physics/Wave/Doppler_Effect.md b/content/Physics/Wave/Doppler_Effect.md
index 218cb9993..02a8a186c 100644
--- a/content/Physics/Wave/Doppler_Effect.md
+++ b/content/Physics/Wave/Doppler_Effect.md
@@ -33,7 +33,7 @@ $$
## Example
-![](Physics/Wave/attachments/Dopplereffectsourcemovingrightatmach0.7.gif)
+![](physics/Wave/attachments/Dopplereffectsourcemovingrightatmach0.7.gif)
Here $v_s = 0.7c$: the wavefronts begin to bunch up on the right of (in front of) the source and spread farther apart on the left of (behind) it.
diff --git a/content/Report/2023.04.16 天线测试.md b/content/Report/2023.04.16 天线测试.md
index 3b7961157..2c7d8fdaa 100644
# Background
-![](Report/attachments/96251ac46494ab01294e570e352c426.png)
+![](report/attachments/96251ac46494ab01294e570e352c426.png)
# Test results
With no reflector within 30 cm ahead (beyond this radar's ranging limit, approximately equivalent to no reflection out to infinite distance), we obtain the collector-side voltage:
-![](Report/attachments/7983094eb03d1dcc285edf9c1768018.png)
+![](report/attachments/7983094eb03d1dcc285edf9c1768018.png)
Data collected with the previous antenna:
-![](Report/attachments/f5d557933b15f8ea7f6861f70663d13.png)
+![](report/attachments/f5d557933b15f8ea7f6861f70663d13.png)
There are two problems:
@@ -38,11 +38,11 @@
Data collected with the new antenna:
-![](Report/attachments/abaec3368e16f2c9be67b5edbba39be.png)
+![](report/attachments/abaec3368e16f2c9be67b5edbba39be.png)
Signal collected with the old antenna:
-![](Report/attachments/ac4c5aa53392835d3db04a78e73476b.png)
+![](report/attachments/ac4c5aa53392835d3db04a78e73476b.png)
The problem is:
diff --git a/content/atlas.md b/content/atlas.md
index 814a92539..fbc55a03e 100644
--- a/content/atlas.md
+++ b/content/atlas.md
@@ -17,7 +17,7 @@ tags:
* [Hardware](computer_sci/Hardware/Hardware_MOC.md)
-* [Physics](Physics/Physics_MOC.md)
+* [Physics](physics/Physics_MOC.md)
* [Signal Processing](signal_processing/signal_processing_MOC.md)
@@ -25,7 +25,7 @@
* [About coding language design detail](computer_sci/coding_knowledge/coding_lang_MOC.md)
-* [Math](Math/MOC.md)
+* [Math](math/MOC.md)
* [Computational Geometry](computer_sci/computational_geometry/MOC.md)
@@ -41,7 +41,7 @@
🛶 Also, he learns some knowledge about his hobbies:
-* [📷 Photography](Photography/Photography_MOC.md)
+* [📷 Photography](photography/Photography_MOC.md)
* [📮Literature (文学)](文学/文学_MOC.md)
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/Trick/quantile_loss.md b/content/computer_sci/Deep_Learning_And_Machine_Learning/Trick/quantile_loss.md
index b879383af..ea71c817d 100644
--- a/content/computer_sci/Deep_Learning_And_Machine_Learning/Trick/quantile_loss.md
+++ b/content/computer_sci/Deep_Learning_And_Machine_Learning/Trick/quantile_loss.md
@@ -12,7 +12,7 @@ Quantile loss measures the difference between a predicted distribution and the target distribution, and is especially suited
# What is quantile
-[Quantile](Math/Statistics/Basic/Quantile.md)
+[Quantile](math/Statistics/Basic/Quantile.md)
# What is a prediction interval
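The quantile_loss note above names the loss but does not spell its formula out; the standard "pinball" definition it refers to looks like this as a runnable sketch (the arrays are invented):

```python
import numpy as np

def quantile_loss(y: np.ndarray, y_hat: np.ndarray, tau: float) -> float:
    """Pinball loss for quantile tau: mean of max(tau*e, (tau-1)*e), e = y - y_hat."""
    e = y - y_hat
    return float(np.mean(np.maximum(tau * e, (tau - 1) * e)))

y     = np.array([10.0, 12.0, 14.0])
y_hat = np.array([11.0, 11.0, 11.0])
# tau = 0.9 penalizes under-prediction 9x harder than over-prediction,
# which pushes the fitted 0.9-quantile estimate upward.
print(quantile_loss(y, y_hat, 0.5), quantile_loss(y, y_hat, 0.9))  # ~0.83, ~1.23
```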
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/XGBoost.md b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/XGBoost.md
index da50be841..8e0b0c95f 100644
--- a/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/XGBoost.md
+++ b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/XGBoost.md
@@ -10,7 +10,7 @@ XGBoost is an open-source software library that implements optimized distributed
# What you need to know first
-* [🚧🚧AdaBoost](computer_sci/deep_learning_and_machine_learning/deep_learning/AdaBoost.md)
+* [🚧🚧AdaBoost](computer_sci/deep_learning_and_machine_learning/deep_learning/adaBoost.md)
# What is XGBoost
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/deep_learning_MOC.md b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/deep_learning_MOC.md
index 834433205..5c0f61410 100644
--- a/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/deep_learning_MOC.md
+++ b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/deep_learning_MOC.md
@@ -8,21 +8,21 @@ tags:
# Attention is all you need
-* [[computer_sci/deep_learning_and_machine_learning/deep_learning/⭐Attention|Attention Blocker]]
-* [[computer_sci/deep_learning_and_machine_learning/deep_learning/Transformer|Transformer]]
+* [[computer_sci/deep_learning_and_machine_learning/deep_learning/attention|Attention Blocker]]
+* [[computer_sci/deep_learning_and_machine_learning/deep_learning/transformer|transformer]]
# Tree-like architecture
-* [Decision Tree](computer_sci/deep_learning_and_machine_learning/deep_learning/Decision_Tree.md)
-* [Random Forest](computer_sci/deep_learning_and_machine_learning/deep_learning/Random_Forest.md)
-* [Deep Neural Decision Forests](computer_sci/deep_learning_and_machine_learning/deep_learning/Deep_Neural_Decision_Forests.md)
+* [Decision Tree](computer_sci/deep_learning_and_machine_learning/deep_learning/decision_tree.md)
+* [Random Forest](computer_sci/deep_learning_and_machine_learning/deep_learning/random_forest.md)
+* [Deep Neural Decision Forests](computer_sci/deep_learning_and_machine_learning/deep_learning/deep_neural_decision_forests.md)
* [XGBoost](computer_sci/deep_learning_and_machine_learning/deep_learning/XGBoost.md)
# Ensemble Learning
-* [AdaBoost](computer_sci/deep_learning_and_machine_learning/deep_learning/AdaBoost.md)
+* [adaBoost](computer_sci/deep_learning_and_machine_learning/deep_learning/adaBoost.md)
* [XGBoost](computer_sci/deep_learning_and_machine_learning/deep_learning/XGBoost.md)
diff --git a/content/computer_sci/deep_learning_and_machine_learning/Evaluation/attachments/Pasted image 20230526161419.png b/content/computer_sci/deep_learning_and_machine_learning/Evaluation/attachments/Pasted image 20230526161419.png
new file mode 100644
index 000000000..76c404bdf
Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/Evaluation/attachments/Pasted image 20230526161419.png differ
diff --git a/content/computer_sci/deep_learning_and_machine_learning/Evaluation/attachments/Pasted image 20230526161422.png b/content/computer_sci/deep_learning_and_machine_learning/Evaluation/attachments/Pasted image 20230526161422.png
new file mode 100644
index 000000000..76c404bdf
Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/Evaluation/attachments/Pasted image 20230526161422.png differ
diff --git
a/content/computer_sci/deep_learning_and_machine_learning/Evaluation/attachments/Pasted image 20230526162035.png b/content/computer_sci/deep_learning_and_machine_learning/Evaluation/attachments/Pasted image 20230526162035.png new file mode 100644 index 000000000..16bffe91e Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/Evaluation/attachments/Pasted image 20230526162035.png differ diff --git a/content/computer_sci/deep_learning_and_machine_learning/Evaluation/attachments/Pasted image 20230526162839.png b/content/computer_sci/deep_learning_and_machine_learning/Evaluation/attachments/Pasted image 20230526162839.png new file mode 100644 index 000000000..c3280de52 Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/Evaluation/attachments/Pasted image 20230526162839.png differ diff --git a/content/computer_sci/deep_learning_and_machine_learning/Evaluation/attachments/Pasted image 20230526163614.png b/content/computer_sci/deep_learning_and_machine_learning/Evaluation/attachments/Pasted image 20230526163614.png new file mode 100644 index 000000000..948d2f3f8 Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/Evaluation/attachments/Pasted image 20230526163614.png differ diff --git a/content/computer_sci/deep_learning_and_machine_learning/Evaluation/attachments/Pasted image 20230526164105.png b/content/computer_sci/deep_learning_and_machine_learning/Evaluation/attachments/Pasted image 20230526164105.png new file mode 100644 index 000000000..75965a471 Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/Evaluation/attachments/Pasted image 20230526164105.png differ diff --git a/content/computer_sci/deep_learning_and_machine_learning/Evaluation/attachments/Pasted image 20230526164106.png b/content/computer_sci/deep_learning_and_machine_learning/Evaluation/attachments/Pasted image 20230526164106.png new file mode 100644 index 000000000..75965a471 Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/Evaluation/attachments/Pasted image 20230526164106.png differ diff --git a/content/computer_sci/deep_learning_and_machine_learning/Evaluation/attachments/Pasted image 20230529130501.png b/content/computer_sci/deep_learning_and_machine_learning/Evaluation/attachments/Pasted image 20230529130501.png new file mode 100644 index 000000000..a8cbbc247 Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/Evaluation/attachments/Pasted image 20230529130501.png differ diff --git a/content/computer_sci/deep_learning_and_machine_learning/Evaluation/attachments/Pasted image 20230529130509.png b/content/computer_sci/deep_learning_and_machine_learning/Evaluation/attachments/Pasted image 20230529130509.png new file mode 100644 index 000000000..c126dc38b Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/Evaluation/attachments/Pasted image 20230529130509.png differ diff --git a/content/computer_sci/deep_learning_and_machine_learning/Evaluation/attachments/Pasted image 20230529130856.png b/content/computer_sci/deep_learning_and_machine_learning/Evaluation/attachments/Pasted image 20230529130856.png new file mode 100644 index 000000000..afb26188a Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/Evaluation/attachments/Pasted image 20230529130856.png differ diff --git a/content/computer_sci/deep_learning_and_machine_learning/Evaluation/model_evaluation_MOC.md 
b/content/computer_sci/deep_learning_and_machine_learning/Evaluation/model_evaluation_MOC.md
new file mode 100644
index 000000000..97c7ae3e9
--- /dev/null
+++ b/content/computer_sci/deep_learning_and_machine_learning/Evaluation/model_evaluation_MOC.md
@@ -0,0 +1,8 @@
+---
+title: Model Evaluation - MOC
+tags:
+- deep-learning
+- evaluation
+---
+
+* [Model Evaluation in Time Series Forecasting](computer_sci/deep_learning_and_machine_learning/Evaluation/time_series_forecasting.md)
\ No newline at end of file
diff --git a/content/computer_sci/deep_learning_and_machine_learning/Evaluation/time_series_forecasting.md b/content/computer_sci/deep_learning_and_machine_learning/Evaluation/time_series_forecasting.md
new file mode 100644
index 000000000..2d3031ea7
--- /dev/null
+++ b/content/computer_sci/deep_learning_and_machine_learning/Evaluation/time_series_forecasting.md
@@ -0,0 +1,121 @@
+---
+title: Model Evaluation in Time Series Forecasting
+tags:
+- deep-learning
+- evaluation
+- time-series-dealing
+---
+
+![](computer_sci/deep_learning_and_machine_learning/Evaluation/attachments/Pasted%20image%2020230526162839.png)
+
+# Some famous time series scoring techniques
+
+1. **MAE, RMSE and AIC**
+2. **Mean Forecast Accuracy**
+3. **Warning: The time series model EVALUATION TRAP!**
+4. **RdR Score Benchmark**
+
+## MAE, RMSE, AIC
+
+MAE means **Mean Absolute Error (MAE)** and RMSE means **Root Mean Squared Error (RMSE)**.
+
+These are two well-known metrics for measuring the accuracy of continuous variables. MAE used to be the common choice in older articles; observation around 2016 already showed RMSE (or some version of R-squared) gradually taking over.
+
+*We need to understand when each metric is the better choice.*
+
+### MAE
+
+$$
+\text{MAE} = \frac{1}{n}\sum_{j=1}^n |y_j - \hat{y}_j|
+$$
+A characteristic of MAE is that every individual difference carries equal weight.
+
+If you drop the absolute value, MAE becomes the **Mean Bias Error (MBE)**; when using MBE, be aware that positive and negative biases cancel each other out.
+
+### RMSE
+
+$$
+\text{RMSE} = \sqrt{\frac{1}{n} \sum_{j=1}^n (y_j - \hat{y}_j)^2}
+$$
+
+The Root Mean Squared Error (RMSE) is a quadratic scoring rule that also measures the average magnitude of the error. It is the square root of the average of the squared differences between predictions and actual observations.
+
+### AIC
+
+$$
+\text{AIC} = 2k - 2\ln{(\hat{L})}
+$$
+$k$ is the number of estimated parameters in the model, and $\hat{L}$ is the maximized value of the model's likelihood function.
+
+The **Akaike information criterion** (AIC) is a metric that helps compare models, because it considers both how well the model fits the data and how complex the model is.
+
+AIC measures the loss of information and **penalizes model complexity**. It is *the negative log-likelihood penalized by the number of parameters*. The key idea behind AIC is that fewer model parameters is better. **AIC lets you test how well a model fits the dataset without overfitting it.**
+
+### Comparison
+
+#### Similarities between MAE and RMSE
+
+Both MAE and RMSE express the average model prediction error in the units of the variable of interest. Both metrics can range from 0 to ∞ and are indifferent to the direction of the error. They are negatively-oriented scores, meaning lower values are better.
+
+#### Differences between MAE and RMSE
+
+*Since the errors are squared before being averaged, RMSE gives a relatively high weight to large errors.* This means RMSE should be more useful when large errors are particularly undesirable, whereas in MAE's average such large errors get diluted.
+
+![](computer_sci/deep_learning_and_machine_learning/Evaluation/attachments/Pasted%20image%2020230526161422.png)
+
+For AIC, lower is better, but there is no perfect score; it can only be used to compare the performance of different models on the same dataset.
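A toy check of the "large errors weigh more in RMSE" point just made (the values are invented; not part of the original note):

```python
import numpy as np

def mae(y, y_hat):
    return float(np.mean(np.abs(y - y_hat)))

def rmse(y, y_hat):
    return float(np.sqrt(np.mean((y - y_hat) ** 2)))

y     = np.array([10.0, 10.0, 10.0, 10.0])
small = np.array([ 9.0, 11.0,  9.0, 11.0])   # four 1-unit errors
large = np.array([10.0, 10.0, 10.0, 14.0])   # one 4-unit error, same MAE
print(mae(y, small), rmse(y, small))   # 1.0, 1.0
print(mae(y, large), rmse(y, large))   # 1.0, 2.0 <- RMSE flags the big miss
```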
+## Mean Forecast Accuracy
+
+![](computer_sci/deep_learning_and_machine_learning/Evaluation/attachments/Pasted%20image%2020230526162035.png)
+
+Compute the Forecast Accuracy at every point, then average them to get the Mean Forecast Accuracy.
+
+The major flaw of Mean Forecast Accuracy is that large outliers have a huge negative impact, e.g. $1 - \frac{|\hat{y}_j - y_j|}{y_j} = 1 - \frac{250-25}{25} = -800\%$
+
+The fix is to clip Forecast Accuracy at a minimum of 0%, and the median can also be used instead of the mean.
+
+Generally speaking, **you should use the median rather than the mean when your error distribution is skewed**. In some situations the Mean Forecast Accuracy can also be completely meaningless. If you remember your statistics: the **coefficient of variation** (CV) expresses the ratio of the standard deviation to the mean ($\text{CV} = (\text{Standard Deviation}/\text{Mean} * 100)$). A large CV value means large variability, which also means a greater degree of dispersion around the mean. **For example, we can treat anything with a CV above 0.7 as highly variable and not truly forecastable. It also says that your forecasting model's predictive power is very unstable!**
+
+## RdR Score Benchmark (an experimental metric; the blogger points out it has not appeared in any research paper)
+
+RdR metric stands for:
+* *R*: **Naïve Random Walk**
+* *d*: **Dynamic Time Warping**
+* *R*: **Root Mean Squared Error**
+
+### DTW to deal with shape similarity
+
+![](computer_sci/deep_learning_and_machine_learning/Evaluation/attachments/Pasted%20image%2020230526163614.png)
+
+Metrics like RMSE and MAE take no account of an important criterion: **THE SHAPE SIMILARITY**
+
+The RdR Score Benchmark uses [**Dynamic Time Warping (DTW)**](computer_sci/deep_learning_and_machine_learning/Trick/DTW.md) as its shape-similarity metric
+
+![](computer_sci/deep_learning_and_machine_learning/Evaluation/attachments/Pasted%20image%2020230526164106.png)
+Euclidean distance can be a bad choice between time series, because of warping along the time axis.
+
+* DTW: by "synchronizing"/"aligning" the different signals along the time axis, it finds the optimal (minimum-distance) warping path between two time series, as in the sketch below
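A compact sketch of the dynamic-programming recurrence behind DTW (my own minimal implementation under an absolute-difference cost, not the one from the linked DTW note):

```python
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classic O(n*m) dynamic-programming DTW."""
    n, m = len(a), len(b)
    d = np.full((n + 1, m + 1), np.inf)
    d[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of: insertion, deletion, match
            d[i, j] = cost + min(d[i - 1, j], d[i, j - 1], d[i - 1, j - 1])
    return float(d[n, m])

a = np.array([0.0, 1.0, 2.0, 1.0, 0.0])
b = np.array([0.0, 0.0, 1.0, 2.0, 1.0])   # same shape, shifted one step
print(dtw_distance(a, b))   # 1.0, while the point-wise L1 distance is 4.0
```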
+
+# Reference
+
+* [https://towardsdatascience.com/deepar-mastering-time-series-forecasting-with-deep-learning-bc717771ce85](https://towardsdatascience.com/deepar-mastering-time-series-forecasting-with-deep-learning-bc717771ce85)
\ No newline at end of file
diff --git a/content/computer_sci/deep_learning_and_machine_learning/Famous_Model/Famous_Model_MOC.md b/content/computer_sci/deep_learning_and_machine_learning/Famous_Model/Famous_Model_MOC.md
new file mode 100644
index 000000000..73ae5b214
--- /dev/null
+++ b/content/computer_sci/deep_learning_and_machine_learning/Famous_Model/Famous_Model_MOC.md
@@ -0,0 +1,11 @@
+---
+title: Famous Model MOC
+tags:
+- deep-learning
+- MOC
+---
+
+# Time-series
+
+* [DeepAR](computer_sci/deep_learning_and_machine_learning/Famous_Model/DeepAR.md)
+
diff --git a/content/computer_sci/deep_learning_and_machine_learning/Famous_Model/Temporal_Fusion_Transformer.md b/content/computer_sci/deep_learning_and_machine_learning/Famous_Model/Temporal_Fusion_Transformer.md
new file mode 100644
index 000000000..d4dc84e72
--- /dev/null
+++ b/content/computer_sci/deep_learning_and_machine_learning/Famous_Model/Temporal_Fusion_Transformer.md
@@ -0,0 +1,8 @@
+---
+title: Temporal Fusion Transformer
+tags:
+- deep-learning
+- model
+- time-series-dealing
+---
+
diff --git a/content/computer_sci/deep_learning_and_machine_learning/Famous_Model/attachments/Pasted image 20230523134253.png b/content/computer_sci/deep_learning_and_machine_learning/Famous_Model/attachments/Pasted image 20230523134253.png
new file mode 100644
index 000000000..60ad80e43
Binary files
/dev/null and b/content/computer_sci/deep_learning_and_machine_learning/Famous_Model/attachments/Pasted image 20230523134253.png differ
diff --git a/content/computer_sci/deep_learning_and_machine_learning/Famous_Model/attachments/Pasted image 20230523134255.png b/content/computer_sci/deep_learning_and_machine_learning/Famous_Model/attachments/Pasted image 20230523134255.png
new file mode 100644
index 000000000..60ad80e43
Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/Famous_Model/attachments/Pasted image 20230523134255.png differ
diff --git a/content/computer_sci/deep_learning_and_machine_learning/Famous_Model/attachments/Pasted image 20230523141219.png b/content/computer_sci/deep_learning_and_machine_learning/Famous_Model/attachments/Pasted image 20230523141219.png
new file mode 100644
index 000000000..a0987e1df
Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/Famous_Model/attachments/Pasted image 20230523141219.png differ
diff --git a/content/computer_sci/deep_learning_and_machine_learning/Famous_Model/attachments/Pasted image 20230523151201.png b/content/computer_sci/deep_learning_and_machine_learning/Famous_Model/attachments/Pasted image 20230523151201.png
new file mode 100644
index 000000000..4911ae0e1
Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/Famous_Model/attachments/Pasted image 20230523151201.png differ
diff --git a/content/computer_sci/deep_learning_and_machine_learning/LLM/LLM_MOC.md b/content/computer_sci/deep_learning_and_machine_learning/LLM/LLM_MOC.md
new file mode 100644
index 000000000..1a3a92952
--- /dev/null
+++ b/content/computer_sci/deep_learning_and_machine_learning/LLM/LLM_MOC.md
@@ -0,0 +1,25 @@
+---
+title: Large Language Model (LLM) - MOC
+tags:
+- deep-learning
+- LLM
+- NLP
+---
+
+# Training
+
+* [Training Tech Outline](computer_sci/deep_learning_and_machine_learning/LLM/train/steps.md)
+* [⭐⭐⭐Train LLM from scratch](computer_sci/deep_learning_and_machine_learning/LLM/train/train_LLM.md)
+* [⭐⭐⭐Detailed explanation of RLHF technology](computer_sci/deep_learning_and_machine_learning/LLM/train/RLHF.md)
+* [How to use fine-tuning to create your chatbot](computer_sci/deep_learning_and_machine_learning/LLM/train/finr_tune/how_to_fine_tune.md)
+* [Learn fine-tuning from Stanford Alpaca](computer_sci/deep_learning_and_machine_learning/LLM/train/finr_tune/learn_finetune_byStanfordAlpaca.md)
+
+# Metrics
+
+How do we evaluate an LLM's performance?
+ +* [Tasks to evaluate BERT - Maybe can be deployed in other LM](computer_sci/deep_learning_and_machine_learning/LLM/metircs/some_task.md) + +# Basic + +* [LLM Hyperparameter](computer_sci/deep_learning_and_machine_learning/LLM/basic/llm_hyperparameter.md) diff --git a/content/computer_sci/deep_learning_and_machine_learning/LLM/basic/attachments/1687853622172.mp4 b/content/computer_sci/deep_learning_and_machine_learning/LLM/basic/attachments/1687853622172.mp4 new file mode 100644 index 000000000..248c3b417 Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/LLM/basic/attachments/1687853622172.mp4 differ diff --git a/content/computer_sci/deep_learning_and_machine_learning/LLM/basic/attachments/Pasted image 20230627160123.png b/content/computer_sci/deep_learning_and_machine_learning/LLM/basic/attachments/Pasted image 20230627160123.png new file mode 100644 index 000000000..72e7c63b5 Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/LLM/basic/attachments/Pasted image 20230627160123.png differ diff --git a/content/computer_sci/deep_learning_and_machine_learning/LLM/basic/attachments/Pasted image 20230627160125.png b/content/computer_sci/deep_learning_and_machine_learning/LLM/basic/attachments/Pasted image 20230627160125.png new file mode 100644 index 000000000..72e7c63b5 Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/LLM/basic/attachments/Pasted image 20230627160125.png differ diff --git a/content/computer_sci/deep_learning_and_machine_learning/LLM/basic/attachments/Pasted image 20230627162848.png b/content/computer_sci/deep_learning_and_machine_learning/LLM/basic/attachments/Pasted image 20230627162848.png new file mode 100644 index 000000000..b8612d971 Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/LLM/basic/attachments/Pasted image 20230627162848.png differ diff --git a/content/computer_sci/deep_learning_and_machine_learning/LLM/basic/attachments/Pasted image 20230627163514.png b/content/computer_sci/deep_learning_and_machine_learning/LLM/basic/attachments/Pasted image 20230627163514.png new file mode 100644 index 000000000..81f16a195 Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/LLM/basic/attachments/Pasted image 20230627163514.png differ diff --git a/content/computer_sci/deep_learning_and_machine_learning/LLM/basic/attachments/Pasted image 20230627165311.png b/content/computer_sci/deep_learning_and_machine_learning/LLM/basic/attachments/Pasted image 20230627165311.png new file mode 100644 index 000000000..5163f06f8 Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/LLM/basic/attachments/Pasted image 20230627165311.png differ diff --git a/content/computer_sci/deep_learning_and_machine_learning/LLM/basic/attachments/physic_temp.gif b/content/computer_sci/deep_learning_and_machine_learning/LLM/basic/attachments/physic_temp.gif new file mode 100644 index 000000000..a2335b731 Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/LLM/basic/attachments/physic_temp.gif differ diff --git a/content/computer_sci/deep_learning_and_machine_learning/LLM/basic/attachments/rating_probabililty.gif b/content/computer_sci/deep_learning_and_machine_learning/LLM/basic/attachments/rating_probabililty.gif new file mode 100644 index 000000000..780d3de8b Binary files /dev/null and 
b/content/computer_sci/deep_learning_and_machine_learning/LLM/basic/attachments/rating_probabililty.gif differ
diff --git a/content/computer_sci/deep_learning_and_machine_learning/LLM/basic/llm_hyperparameter.md b/content/computer_sci/deep_learning_and_machine_learning/LLM/basic/llm_hyperparameter.md
new file mode 100644
index 000000000..a0f9deed9
--- /dev/null
+++ b/content/computer_sci/deep_learning_and_machine_learning/LLM/basic/llm_hyperparameter.md
@@ -0,0 +1,56 @@
+---
+title: LLM hyperparameter
+tags:
+- hyperparameter
+- LLM
+- deep-learning
+- basic
+---
+
+# LLM Temperature
+
+The term temperature comes from its physical meaning: the higher the temperature, the faster the atoms move, i.e. the more randomness.
+
+![](computer_sci/deep_learning_and_machine_learning/LLM/basic/attachments/physic_temp.gif)
+
+LLM temperature is a hyperparameter that regulates **the randomness, or creativity, of the output.**
+
+* The higher the LLM temperature, the more diverse and creative the output, and the more likely it is to stray from the context.
+* The lower the LLM temperature, the more focused and deterministic the output, sticking closely to the most likely prediction.
+
+![](computer_sci/deep_learning_and_machine_learning/LLM/basic/attachments/Pasted%20image%2020230627160125.png)
+
+## More detail
+
+An LLM's job is to give a probability for the next word, like this:
+
+![](computer_sci/deep_learning_and_machine_learning/LLM/basic/attachments/Pasted%20image%2020230627162848.png)
+
+In "A cat is chasing a …", lots of words could fill that blank. Different words have different probabilities; internally, the model outputs a rating for each candidate next word.
+
+Sure, we could always pick the highest-rated word, but that would produce very standard, predictable, boring sentences, and it would not match human language, because we don't always use the most common word either.
+
+So we want a mechanism that **allows every word with a decent rating to occur with a reasonable probability**; that is why an LLM needs temperature.
+
+As in the real physical world, we sample to describe the distribution; *we use a softmax to describe the probability distribution of the next word*. The temperature is the term $T$ in the formula:
+
+$$
+p_i = \frac{\exp{(\frac{R_i}{T})}}{\sum_i \exp{(\frac{R_i}{T})}}
+$$
+
+![](computer_sci/deep_learning_and_machine_learning/LLM/basic/attachments/Pasted%20image%2020230627163514.png)
+
+The lower the $T$, the closer the highest-rated word's probability goes to 100%; the higher the $T$, the smoother the probabilities become across all words.
+
+*The gif below is important and intuitive.*
+
+![](computer_sci/deep_learning_and_machine_learning/LLM/basic/attachments/rating_probabililty.gif)
+
+So, with different settings of $T$, the next-word probabilities change, and we emit the next word by sampling from that probability distribution.
+
+![](computer_sci/deep_learning_and_machine_learning/LLM/basic/attachments/Pasted%20image%2020230627165311.png)
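+
+A minimal NumPy sketch of exactly this formula; the ratings are made up for illustration:
+
+```python
+import numpy as np
+
+def next_word_probs(ratings: np.ndarray, T: float) -> np.ndarray:
+    """Softmax with temperature: p_i = exp(R_i / T) / sum_i exp(R_i / T)."""
+    logits = ratings / T
+    logits -= logits.max()        # subtract the max for numerical stability
+    p = np.exp(logits)
+    return p / p.sum()
+
+ratings = np.array([5.0, 3.0, 1.0])
+print(next_word_probs(ratings, T=0.1))  # nearly one-hot: the top word dominates
+print(next_word_probs(ratings, T=2.0))  # much flatter: more creative sampling
+```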
+
+# Reference
+
+* [LLM Temperature, Deepchecks](https://deepchecks.com/glossary/llm-parameters/#:~:text=One%20intriguing%20parameter%20within%20LLMs,of%20straying%20from%20the%20context.)
+* [⭐⭐⭐https://www.youtube.com/watch?v=YjVuJjmgclU](https://www.youtube.com/watch?v=YjVuJjmgclU)
\ No newline at end of file
diff --git a/content/computer_sci/deep_learning_and_machine_learning/LLM/langchain/attachments/Pasted image 20230627154149.png b/content/computer_sci/deep_learning_and_machine_learning/LLM/langchain/attachments/Pasted image 20230627154149.png
new file mode 100644
index 000000000..520150e74
Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/LLM/langchain/attachments/Pasted image 20230627154149.png differ
diff --git a/content/computer_sci/deep_learning_and_machine_learning/LLM/langchain/langchain_basic.md b/content/computer_sci/deep_learning_and_machine_learning/LLM/langchain/langchain_basic.md
new file mode 100644
index 000000000..cee907c98
--- /dev/null
+++ b/content/computer_sci/deep_learning_and_machine_learning/LLM/langchain/langchain_basic.md
@@ -0,0 +1,44 @@
+---
+title: LangChain Explained
+tags:
+- LLM
+- basic
+- langchain
+---
+
+# What is LangChain
+
+LangChain is an open source framework that allows AI developers to combine LLMs like GPT-4 *with external sources of computation and data*.
+
+# Why LangChain
+
+LangChain lets an LLM answer questions based on your own documents, which enables lots of amazing apps.
+
+You can use LangChain to have GPT analyze your own company data, book flights based on your schedule, summarize piles of PDFs, and so on.
+
+# LangChain value propositions
+
+## Components
+
+* LLM Wrappers
+* Prompt Templates
+* Indexes for relevant information retrieval
+
+## Chains
+
+Chains assemble components to solve a specific task, such as finding information in a book.
+
+## Agents
+
+Agents allow LLMs to interact with their environment, for instance by making an API request that performs a specific action.
+
+# LangChain Framework
+
+![](computer_sci/deep_learning_and_machine_learning/LLM/langchain/attachments/Pasted%20image%2020230627154149.png)
+
+# Reference
+
+* [https://www.youtube.com/watch?v=aywZrzNaKjs](https://www.youtube.com/watch?v=aywZrzNaKjs)
\ No newline at end of file
diff --git a/content/computer_sci/deep_learning_and_machine_learning/LLM/metircs/attachments/Pasted image 20230629140914.png b/content/computer_sci/deep_learning_and_machine_learning/LLM/metircs/attachments/Pasted image 20230629140914.png
new file mode 100644
index 000000000..769437eed
Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/LLM/metircs/attachments/Pasted image 20230629140914.png differ
diff --git a/content/computer_sci/deep_learning_and_machine_learning/LLM/metircs/attachments/Pasted image 20230629140929.png b/content/computer_sci/deep_learning_and_machine_learning/LLM/metircs/attachments/Pasted image 20230629140929.png
new file mode 100644
index 000000000..769437eed
Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/LLM/metircs/attachments/Pasted image 20230629140929.png differ
diff --git a/content/computer_sci/deep_learning_and_machine_learning/LLM/metircs/some_task.md b/content/computer_sci/deep_learning_and_machine_learning/LLM/metircs/some_task.md
new file mode 100644
index 000000000..d10d8f3a6
--- /dev/null
+++ b/content/computer_sci/deep_learning_and_machine_learning/LLM/metircs/some_task.md
@@ -0,0 +1,36 @@
+---
+title: Tasks to evaluate BERT - Maybe can be deployed in other LM
+tags:
+- LLM
+- metircs
+- deep-learning
+- benchmark
+---
+
+# Overview
+
+![](computer_sci/deep_learning_and_machine_learning/LLM/metircs/attachments/Pasted%20image%2020230629140929.png) + +# MNLI-m (Multi-Genre Natural Language Inference - Matched): + +MNLI-m is a benchmark dataset and task for natural language inference (NLI). The goal of NLI is to determine the logical relationship between two given sentences: whether the relationship is "entailment," "contradiction," or "neutral." MNLI-m focuses on matched data, which means the sentences are drawn from the same genres as the sentences in the training set. It is part of the GLUE (General Language Understanding Evaluation) benchmark, which evaluates the performance of models on various natural language understanding tasks. + +# QNLI (Question Natural Language Inference): + +QNLI is another NLI task included in the GLUE benchmark. In this task, the model is given a sentence that is a premise and a sentence that is a question related to the premise. The goal is to determine whether the answer to the question can be inferred from the given premise. The dataset for QNLI is derived from the Stanford Question Answering Dataset (SQuAD). + +# MRPC (Microsoft Research Paraphrase Corpus): + +MRPC is a dataset used for paraphrase identification or semantic equivalence detection. It consists of sentence pairs from various sources that are labeled as either paraphrases or not. The task is to classify whether a given sentence pair expresses the same meaning (paraphrase) or not. MRPC is also part of the GLUE benchmark and helps evaluate models' ability to understand sentence similarity and equivalence. + +# SST-2 (Stanford Sentiment Treebank - Binary Sentiment Classification): + +SST-2 is a binary sentiment classification task based on the Stanford Sentiment Treebank dataset. The dataset contains sentences from movie reviews labeled as either positive or negative sentiment. The task is to classify a given sentence as expressing a positive or negative sentiment. SST-2 is often used to evaluate the ability of models to understand and classify sentiment in natural language. + +# SQuAD (Stanford Question Answering Dataset): + +SQuAD is a widely known dataset and task for machine reading comprehension. It consists of questions posed by humans on a set of Wikipedia articles, where the answers to the questions are spans of text from the corresponding articles. The goal is to build models that can accurately answer the questions based on the provided context. SQuAD has been instrumental in advancing the field of question answering and evaluating models' reading comprehension capabilities. + +Overall, these tasks and datasets serve as benchmarks for evaluating natural language understanding and processing models. They cover a range of language understanding tasks, including natural language inference, paraphrase identification, sentiment analysis, and machine reading comprehension. 
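+
+All of these tasks are distributed with the GLUE collection; as a quick illustration (assuming the Hugging Face `datasets` package, which is not mentioned above), the benchmarks can be loaded like this:
+
+```python
+from datasets import load_dataset
+
+mrpc = load_dataset("glue", "mrpc")   # paraphrase pairs with 0/1 labels
+print(mrpc["train"][0])               # {'sentence1': ..., 'sentence2': ..., 'label': ...}
+
+qnli = load_dataset("glue", "qnli")   # premise/question entailment pairs
+sst2 = load_dataset("glue", "sst2")   # binary sentiment sentences
+```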
+
diff --git a/content/computer_sci/deep_learning_and_machine_learning/LLM/train/RLHF.md b/content/computer_sci/deep_learning_and_machine_learning/LLM/train/RLHF.md
new file mode 100644
index 000000000..8f72b3060
--- /dev/null
+++ b/content/computer_sci/deep_learning_and_machine_learning/LLM/train/RLHF.md
@@ -0,0 +1,65 @@
+---
+title: Reinforcement Learning from Human Feedback
+tags:
+- LLM
+- deep-learning
+- RLHF
+- LLM-training-method
+---
+
+# Review: Reinforcement Learning Basics
+
+![](computer_sci/deep_learning_and_machine_learning/LLM/train/attachments/Pasted%20image%2020230628145009.png)
+
+Reinforcement learning is a mathematical framework for learning from interaction.
+
+Demystified, reinforcement learning is an open-ended approach that uses a reward function to optimize an agent to solve complex tasks in a target environment.
+
+# Step by Step
+
+The RLHF training method has three core steps:
+
+1. Pretraining a language model
+2. Gathering data (question-answer data) and training a reward model
+3. Fine-tuning the LM with reinforcement learning
+
+## Step 1. Pretraining Language Models
+
+Read this to learn how to train an LM:
+
+[Pretraining language models](computer_sci/deep_learning_and_machine_learning/LLM/train/train_LLM.md)
+
+OpenAI used a smaller version of GPT-3 for its first popular RLHF model, InstructGPT.
+
+RLHF is still a new area: there is no settled answer to which model makes the best starting point for RLHF, and fine-tuning on expensive augmented data is not necessarily required.
+
+## Step 2. Reward model training
+
+In the reward model, we integrate human preferences into the system.
+
+![](computer_sci/deep_learning_and_machine_learning/LLM/train/attachments/Pasted%20image%2020230629145231.png)
+
+# Reference
+
+* [Reinforcement Learning from Human Feedback: From Zero to chatGPT, YouTube, HuggingFace](https://www.youtube.com/watch?v=2MBJOuVq380)
+* [Hugging Face blog, ChatGPT 背后的“功臣”——RLHF 技术详解](https://huggingface.co/blog/zh/rlhf)
\ No newline at end of file
diff --git a/content/computer_sci/deep_learning_and_machine_learning/LLM/train/attachments/Pasted image 20230628145009.png b/content/computer_sci/deep_learning_and_machine_learning/LLM/train/attachments/Pasted image 20230628145009.png
new file mode 100644
index 000000000..991eeb711
Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/LLM/train/attachments/Pasted image 20230628145009.png differ
diff --git a/content/computer_sci/deep_learning_and_machine_learning/LLM/train/attachments/Pasted image 20230628160836.png b/content/computer_sci/deep_learning_and_machine_learning/LLM/train/attachments/Pasted image 20230628160836.png
new file mode 100644
index 000000000..a8b01f8e8
Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/LLM/train/attachments/Pasted image 20230628160836.png differ
diff --git a/content/computer_sci/deep_learning_and_machine_learning/LLM/train/attachments/Pasted image 20230628161627.png b/content/computer_sci/deep_learning_and_machine_learning/LLM/train/attachments/Pasted image 20230628161627.png
new file mode 100644
index 000000000..67f495fa0
Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/LLM/train/attachments/Pasted image 20230628161627.png differ
diff --git a/content/computer_sci/deep_learning_and_machine_learning/LLM/train/attachments/Pasted image 20230629104307.png b/content/computer_sci/deep_learning_and_machine_learning/LLM/train/attachments/Pasted image 20230629104307.png
new file mode 100644
index 000000000..c421de4fb
Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/LLM/train/attachments/Pasted image 20230629104307.png differ
diff --git a/content/computer_sci/deep_learning_and_machine_learning/LLM/train/attachments/Pasted image 20230629145231.png b/content/computer_sci/deep_learning_and_machine_learning/LLM/train/attachments/Pasted image 20230629145231.png
new file mode 100644
index 000000000..3be6d002d
Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/LLM/train/attachments/Pasted image 20230629145231.png differ
diff --git a/content/computer_sci/deep_learning_and_machine_learning/LLM/train/dataset/make_custom_dataset.md b/content/computer_sci/deep_learning_and_machine_learning/LLM/train/dataset/make_custom_dataset.md
new file mode 100644
index 000000000..483defd34
--- /dev/null
+++ b/content/computer_sci/deep_learning_and_machine_learning/LLM/train/dataset/make_custom_dataset.md
@@ -0,0 +1,8 @@
+---
+title: How to make a custom dataset?
+tags:
+- dataset
+- LLM
+- deep-learning
+---
+
diff --git a/content/computer_sci/deep_learning_and_machine_learning/LLM/train/finr_tune/attachments/Pasted image 20230627145954.png b/content/computer_sci/deep_learning_and_machine_learning/LLM/train/finr_tune/attachments/Pasted image 20230627145954.png
new file mode 100644
index 000000000..46490c09f
Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/LLM/train/finr_tune/attachments/Pasted image 20230627145954.png differ
diff --git a/content/computer_sci/deep_learning_and_machine_learning/LLM/train/finr_tune/how_to_fine_tune.md b/content/computer_sci/deep_learning_and_machine_learning/LLM/train/finr_tune/how_to_fine_tune.md
new file mode 100644
index 000000000..b5ed6332e
--- /dev/null
+++ b/content/computer_sci/deep_learning_and_machine_learning/LLM/train/finr_tune/how_to_fine_tune.md
@@ -0,0 +1,7 @@
+---
+title: How to use fine-tuning to create your chatbot
+tags:
+- deep-learning
+- LLM
+---
+
diff --git a/content/computer_sci/deep_learning_and_machine_learning/LLM/train/finr_tune/learn_finetune_byStanfordAlpaca.md b/content/computer_sci/deep_learning_and_machine_learning/LLM/train/finr_tune/learn_finetune_byStanfordAlpaca.md
new file mode 100644
index 000000000..ee88ff0e9
--- /dev/null
+++ b/content/computer_sci/deep_learning_and_machine_learning/LLM/train/finr_tune/learn_finetune_byStanfordAlpaca.md
@@ -0,0 +1,19 @@
+---
+title: Learn fine-tuning from Stanford Alpaca
+tags:
+- deep-learning
+- LLM
+- fine-tune
+- LLaMA
+---
+
+![](computer_sci/deep_learning_and_machine_learning/LLM/train/finr_tune/attachments/Pasted%20image%2020230627145954.png)
+
+# Reference
+
+* [https://www.youtube.com/watch?v=pcszoCYw3vc](https://www.youtube.com/watch?v=pcszoCYw3vc)
+* [https://crfm.stanford.edu/2023/03/13/alpaca.html](https://crfm.stanford.edu/2023/03/13/alpaca.html)
\ No newline at end of file
diff --git a/content/computer_sci/deep_learning_and_machine_learning/LLM/train/steps.md b/content/computer_sci/deep_learning_and_machine_learning/LLM/train/steps.md
new file mode 100644
index 000000000..d31c00085
--- /dev/null
+++ b/content/computer_sci/deep_learning_and_machine_learning/LLM/train/steps.md
@@ -0,0 +1,24 @@
+---
+title: LLM training steps
+tags:
+- LLM
+- deep-learning
+---
+
+Training a large language model (LLM) typically involves the following steps:
+
+1. **Data collection**: gather large-scale text data for training, such as text from the internet, books, articles, news, and conversation logs. The quality and diversity of the data are crucial for training a high-quality LLM.
+
+2. **Preprocessing**: preprocess the data so it is suitable for model training. This includes tokenization (splitting text into words or subword units), building a vocabulary (mapping words to numeric representations), and cleaning and normalizing the text.
+
+3. **Model architecture**: choose an appropriate architecture for the LLM. The most widely used architecture today is the Transformer, which stacks layers of self-attention and feed-forward networks.
+
+4. **Pretraining**: pretrain the model on a large text corpus. Pretraining extracts linguistic knowledge without supervision by making the model learn tasks such as predicting a masked word or the next word; this lets the model learn rich language representations.
+
+5. **Fine-tuning**: after pretraining, fine-tune the model on task-specific data, i.e. supervised training on labeled data for a particular task such as text generation or question answering. Fine-tuning adapts the model to the demands of that task.
+
+6. **Hyperparameter tuning**: adjust hyperparameters such as the learning rate, batch size, and number of layers to get better performance.
+
+7. **Evaluation and iteration**: evaluate the trained model and iterate on it based on the results, which may involve adjusting the architecture, adding training data, or changing the training strategy.
+
+These steps are usually iterative: through continual training and refinement, the LLM develops better performance and generation ability across NLP tasks. Note that training an LLM takes massive compute and time, and is usually carried out by a dedicated team in a large-scale computing environment.
\ No newline at end of file
diff --git a/content/computer_sci/deep_learning_and_machine_learning/LLM/train/train_LLM.md b/content/computer_sci/deep_learning_and_machine_learning/LLM/train/train_LLM.md
new file mode 100644
index 000000000..5cea2bae0
--- /dev/null
+++ b/content/computer_sci/deep_learning_and_machine_learning/LLM/train/train_LLM.md
@@ -0,0 +1,143 @@
+---
+title: Train LLM from scratch
+tags:
+- LLM
+- LLM-training-method
+- deep-learning
+---
+
+# Find a dataset
+
+Find a corpus of text in the language you prefer.
+* Such as [OSCAR](https://oscar-project.org/)
+
+Intuitively, the more data you can pretrain on, the better the results you will get.
+
+# Train a tokenizer
+
+There are a few things to take into consideration when training a tokenizer.
+
+## Tokenization
+
+You can read a more detailed post here - [Tokenization](computer_sci/deep_learning_and_machine_learning/NLP/basic/tokenization.md)
+
+Tokenization is the process of **breaking text into words or sentences**. These tokens help the machine learn the context of the text, which helps in *interpreting the meaning behind the text*. Hence, tokenization is *the first and foremost step when working with text*. Once the corpus has been tokenized, the resulting tokens can be used to build the vocabulary used in the later steps of training the model.
+
+Example:
+
+"The city is on the river bank" -> "The", "city", "is", "on", "the", "river", "bank"
+
+Here are some typical tokenization strategies:
+* Word (white space) tokenization
+* Character tokenization
+* **Subword tokenization (SOTA)**
+
+Subword tokenization can handle the OOV (out-of-vocabulary) problem effectively.
+
+### Subword Tokenization Algorithm
+
+* **Byte pair encoding** *(BPE)*
+* **Byte-level byte pair encoding**
+* **WordPiece**
+* **unigram**
+* **SentencePiece**
+
+## Word embedding
+
+Tokenization turns our text into tokens. We also want to represent each token mathematically; for this we use word embedding techniques, which convert words into vectors.
+
+Here are some typical word embedding algorithms:
+
+* **Word2Vec**
+    * skip-gram
+    * continuous bag-of-words (CBOW)
+* **GloVe** (Global Vectors for Word Representations)
+* **FastText**
+* **ELMo** (Embeddings from Language Models)
+* **BERT** (Bidirectional Encoder Representations from Transformers)
+    * a language model rather than a traditional word embedding algorithm. **While BERT does generate word embeddings as a byproduct of its training process**, its primary purpose is to learn contextualized representations of words and text segments.
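+
+Since the references below include the Hugging Face "how to train" blog, here is a sketch in its spirit using the `tokenizers` package to train a byte-level BPE tokenizer; `corpus.txt` is a placeholder for your own text file:
+
+```python
+from tokenizers import ByteLevelBPETokenizer
+
+tokenizer = ByteLevelBPETokenizer()
+tokenizer.train(files=["corpus.txt"], vocab_size=52_000, min_frequency=2,
+                special_tokens=["<s>", "<pad>", "</s>", "<unk>", "<mask>"])
+
+print(tokenizer.encode("The city is on the river bank").tokens)
+tokenizer.save_model(".")  # writes vocab.json and merges.txt
+```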
+
+# Train a language model from scratch
+
+We first need to be clear about the definition of a language model.
+
+## Language model definition
+
+Simply put, a language model is a computational model or algorithm designed to understand and generate human language. It is a type of artificial intelligence (AI) model that uses *statistical and probabilistic techniques to predict and generate sequences of words and sentences*.
+
+It captures the statistical relationships between words or characters and *builds a probability distribution of the likelihood of a particular word or sequence of words appearing in a given context.*
+
+Language models can be used for various NLP tasks, including machine translation, speech recognition, text generation, and so on.
+
+Typically, a language model takes a seed input or prompt and uses its *learned knowledge of language (the model weights)* to predict the most likely words or characters to follow.
+
+The state of the art in language models today is GPT-4.
+
+## Language model algorithm
+
+### Classical LM
+
+* **n-gram**
+    * An n-gram can be used as *both a tokenization scheme and a component of a language model*. In my experience, n-grams are easiest to understand as a language model that predicts a likelihood distribution.
+* **HMMs** (Hidden Markov Models)
+* **RNNs** (Recurrent Neural Networks)
+
+### Cutting-edge
+
+* **GPT** (Generative Pre-trained Transformer)
+* **BERT** (Bidirectional Encoder Representations from Transformers)
+* **T5** (Text-To-Text Transfer Transformer)
+* **Megatron-LM**
+
+## Training Method
+
+Differently designed models usually have different training methods. Here we take a BERT-like model as an example.
+
+### BERT-like model
+
+![](computer_sci/deep_learning_and_machine_learning/LLM/train/attachments/Pasted%20image%2020230629104307.png)
+
+To train a BERT-like model, we train it on **Masked Language Modeling** (MLM), i.e. predicting how to fill in arbitrary tokens that we randomly mask in the dataset.
+
+We also train the BERT-like model with **Next Sentence Prediction** (NSP). *MLM teaches BERT to understand relationships between words, and NSP teaches BERT to understand long-term dependencies across sentences.* In NSP training, BERT is given two sentences, A and B, and must determine whether B is the sentence that follows A, outputting `IsNextSentence` or `NotNextSentence`.
+
+With NSP training, BERT performs better.
+
+| Task | MNLI-m (acc) | QNLI (acc) | MRPC (acc) | SST-2 (acc) | SQuAD (f1) |
+| --- | --- | --- | --- | --- | --- |
+| With NSP | 84.4 | 88.4 | 86.7 | 92.7 | 88.5 |
+| Without NSP | 83.9 | 84.9 | 86.5 | 92.6 | 87.9 |
+
+[Table source](https://arxiv.org/pdf/1810.04805.pdf)
+[Table metrics explained](computer_sci/deep_learning_and_machine_learning/LLM/metircs/some_task.md)
+
+# Check the LM actually trained
+
+## Take BERT as example
+
+Aside from watching the training and eval losses go down, we can check our model using `FillMaskPipeline`.
+
+This method takes a masked input (here, the `<mask>` token) and returns a list of the most probable filled-in sequences, with their probabilities.
+
+With this method, we can see whether our LM captures semantic knowledge, or even some sort of (statistical) common-sense reasoning.
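+
+A minimal sketch of that check with the `transformers` pipeline API; the model name is illustrative:
+
+```python
+from transformers import pipeline
+
+fill_mask = pipeline("fill-mask", model="roberta-base")
+
+# Each prediction carries the filled-in token and its probability
+for pred in fill_mask("The capital of France is <mask>.")[:3]:
+    print(pred["token_str"], round(pred["score"], 3))
+```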
+
+# Fine-tune our LM on a downstream task
+
+Finally, we can fine-tune our LM on a downstream task such as translation, a chatbot, or text generation.
+
+Different downstream tasks may call for different fine-tuning methods.
+
+# Example
+
+[https://colab.research.google.com/github/huggingface/blog/blob/main/notebooks/01_how_to_train.ipynb#scrollTo=G-kkz81OY6xH](https://colab.research.google.com/github/huggingface/blog/blob/main/notebooks/01_how_to_train.ipynb#scrollTo=G-kkz81OY6xH)
+
+# Reference
+
+* [HuggingFace blog, How to train a new language model from scratch using Transformers and Tokenizers](https://huggingface.co/blog/how-to-train)
+* [Medium blog, NLP Tokenization](https://medium.com/nerd-for-tech/nlp-tokenization-2fdec7536d17)
+* [Radford, A., Narasimhan, K., Salimans, T. & Sutskever, I. (2018). Improving language understanding by generative pre-training.](https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf)
+
diff --git a/content/computer_sci/deep_learning_and_machine_learning/Model_interpretability/Model_Interpretability_MOC.md b/content/computer_sci/deep_learning_and_machine_learning/Model_interpretability/Model_Interpretability_MOC.md
new file mode 100644
index 000000000..b1b56f005
--- /dev/null
+++ b/content/computer_sci/deep_learning_and_machine_learning/Model_interpretability/Model_Interpretability_MOC.md
@@ -0,0 +1,9 @@
+---
+title: Model Interpretability - MOC
+tags:
+- MOC
+- deep-learning
+- interpretability
+---
+
+* [SHAP](computer_sci/deep_learning_and_machine_learning/Model_interpretability/SHAP.md)
diff --git a/content/computer_sci/deep_learning_and_machine_learning/Model_interpretability/SHAP.md b/content/computer_sci/deep_learning_and_machine_learning/Model_interpretability/SHAP.md
new file mode 100644
index 000000000..ad4a91d78
--- /dev/null
+++ b/content/computer_sci/deep_learning_and_machine_learning/Model_interpretability/SHAP.md
@@ -0,0 +1,193 @@
+---
+title: SHAP - a reliable way to analyze model interpretability
+tags:
+- deep-learning
+- interpretability
+- algorithm
+---
+
+SHAP is the most popular model-agnostic technique used to explain predictions. SHAP stands for **SH**apley **A**dditive ex**P**lanations.
+
+Shapley values are obtained by combining concepts from *cooperative game theory* and *local explanations*.
+
+# Mathematical and Algorithm Foundation
+
+## Shapley Values
+
+Shapley values come from game theory and were invented by Lloyd Shapley as a way of providing a fair answer to the following question:
+
+> [!question]
+> If we have a coalition **C** that collaborates to produce a value **V**: how much did each individual member contribute to the final value?
+
+The way we assess each individual member's contribution is to remove that member to get a new coalition and then compare what the two coalitions produce, as in these graphs:
+
+![](computer_sci/deep_learning_and_machine_learning/Model_interpretability/attachments/Pasted%20image%2020230329165429.png)
+
+We then form, for member 1, every coalition with and without that member, like this:
+
+![](computer_sci/deep_learning_and_machine_learning/Model_interpretability/attachments/Pasted%20image%2020230329165523.png)
+
+Subtracting each right-hand value from its left-hand value gives the marginal contributions, as in the image above; we then take their mean:
+
+$$
+\varphi_i=\frac{1}{\text{Members}}\sum_{\forall \text{C s.t. i}\notin \text{C}} \frac{\text{Marginal Contribution of i to C}}{\text{Coalitions of size |C|}}
+$$
+
+## Shapley Additive Explanations
+
+We need to know what **additive** means here.
Lundberg and Lee define an additive feature attribution as follows:
+
+![](computer_sci/deep_learning_and_machine_learning/Model_interpretability/attachments/Pasted%20image%2020230329165623.png)
+
+![](computer_sci/deep_learning_and_machine_learning/Model_interpretability/attachments/Pasted%20image%2020230329165818.png)
+
+$x'$, the simplified local input, usually means that we turn a feature vector into a discrete binary vector, where features are either included or excluded. $g(x')$ should take this form:
+
+$$
+g(x')=\varphi_0+\sum_{i=1}^N \varphi_i {x'}_i
+$$
+
+* $\varphi_0$ is the **null output** of the model, that is, its **average output**
+* $\varphi_i$ is the **feature effect**: how much that feature changes the output of the model, as introduced above. It is called the **attribution**
+
+![](computer_sci/deep_learning_and_machine_learning/Model_interpretability/attachments/Pasted%20image%2020230329165840.png)
+
+Lundberg and Lee go on to describe a set of three desirable properties of such an additive feature method: **local accuracy**, **missingness**, and **consistency**.
+
+### Local accuracy
+
+$$
+g(x')\approx f(x) \quad \text{if} \quad x'\approx x
+$$
+
+### Missingness
+
+$$
+{x_i}' = 0 \rightarrow \varphi_i = 0
+$$
+
+If a feature is excluded from the model, its attribution must be zero; that is, only the inclusion of features can affect the output of the explanation model, never their exclusion.
+
+### Consistency
+
+If a feature's contribution changes, its attributed effect cannot change in the opposite direction.
+
+# Why SHAP
+
+In their paper, Lundberg and Lee argue that an additive explanatory model satisfies all three properties only if **its feature attributions are specifically chosen to be the Shapley values of those features**, which only SHAP does.
+
+# SHAP, step-by-step process, as in `shap.Explainer`
+
+As an example, consider an ice-cream shop in an airport with four features we can observe to predict its business:
+
+$$
+\begin{bmatrix}
+\text{temperature} & \text{day of week} & \text{num of flights} & \text{num of hours}
+\end{bmatrix}
+\\
+\rightarrow \\
+\begin{bmatrix}
+T & D & F & H
+\end{bmatrix}
+$$
+
+Say we want the Shapley value of temperature $T=80$ in the sample [80 1 100 4]. Here are the steps:
+
+* Step 1. Take a random permutation of the features, and bracket the feature we care about together with everything to its right. (Done manually here.)
+
+$$
+\begin{bmatrix}
+F & D & \underbrace{T \quad H}
+\end{bmatrix}
+$$
+
+* Step 2. Pick a random sample from the dataset.
+
+For example, [200 5 70 8], in the order [F D T H].
+
+* Step 3. Form the vectors $x_1 \quad x_2$.
+
+$$
+x_1=[100 \quad 1 \quad 80 \quad \color{#BF40BF} 8 \color{#FFFFFF}]
+$$
+
+$x_1$ is partially the original sample and partially the randomly chosen one: the bracketed features take the random sample's values, except the feature we care about.
+
+$$
+x_2 = [100 \quad 1 \quad \color{#BF40BF} 70 \quad 8 \color{#FFFFFF}]
+$$
+
+$x_2$ additionally swaps the feature we care about to the random sample's value.
+
+Then calculate and record the difference between the two model outputs:
+
+$$
+\text{DIFF} = f(x_1) - f(x_2)
+$$
+
+* Step 4. Record the difference, return to Step 1, and repeat many times.
+
+$$
+\text{SHAP}(T=80 | [80 \quad 1 \quad 100 \quad 4]) = \text{average(DIFF)}
+$$
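+
+A compact Monte-Carlo sketch of exactly these four steps; `model` is assumed to be any callable that scores a 1-D feature vector, and `X` is the dataset as a NumPy array:
+
+```python
+import numpy as np
+
+def shap_value_mc(model, x, X, feature, n_iter=2000, seed=0):
+    """Estimate one Shapley value by the permutation sampling described above."""
+    rng = np.random.default_rng(seed)
+    total = 0.0
+    for _ in range(n_iter):
+        perm = rng.permutation(len(x))      # Step 1: random feature order
+        z = X[rng.integers(len(X))]         # Step 2: random sample from the dataset
+        pos = np.where(perm == feature)[0][0]
+        right = perm[pos + 1:]              # everything to the right of our feature
+        x1, x2 = x.copy(), x.copy()
+        x1[right] = z[right]                # Step 3: x1 keeps our feature's value
+        x2[right] = z[right]
+        x2[feature] = z[feature]            #         x2 replaces it as well
+        total += model(x1) - model(x2)      # Step 4: record the difference
+    return total / n_iter
+```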
+
+# Shapley kernel
+
+## Too many coalitions need to be sampled
+
+As introduced above, for each $\varphi_i$ we need to sample a lot of coalitions and compute the differences.
+
+For 4 features, 64 coalitions in total need to be sampled; for 32 features, 17.1 billion.
+
+That is entirely untenable.
+
+To get over this difficulty, we need to devise a **Shapley kernel**, which is what Lundberg and Lee did.
+
+![](computer_sci/deep_learning_and_machine_learning/Model_interpretability/attachments/Pasted%20image%2020230329181956.png)
+
+## Detail
+![](computer_sci/deep_learning_and_machine_learning/Model_interpretability/attachments/Pasted%20image%2020230329182011.png)
+
+Most ML models won't simply let you omit a feature. Instead, we define a **background dataset** $B$, one that contains a set of representative data points the model was trained over. We then fill in the omitted feature or features with values from the background dataset, while holding the features that are included in the permutation fixed at their original values. We then take the average of the model output over all of these new synthetic data points as our model output for that feature permutation, which we call $\bar{y}$.
+
+$$
+E[y_{\text{12i4}}\ \ \forall \ \text{i}\in B] = \bar{y}_{\text{124}}
+$$
+![](computer_sci/deep_learning_and_machine_learning/Model_interpretability/attachments/Pasted%20image%2020230329205039.png)
+
+We then have a number of samples computed in this way, as in the image on the left.
+
+We can formulate this as a weighted linear regression, with each feature assigned a coefficient.
+
+And it can be proven that, with a special choice of weights, the coefficients are the Shapley values. **This weighting scheme is the basis of the Shapley kernel.** The weighted linear regression process as a whole is then Kernel SHAP.
+
+### Different types of SHAP
+
+- **Kernel SHAP**
+- Low-order SHAP
+- Linear SHAP
+- Max SHAP
+- Deep SHAP
+- Tree SHAP
+
+![](computer_sci/deep_learning_and_machine_learning/Model_interpretability/attachments/Pasted%20image%2020230329205130.png)
+
+### You need to notice
+In the end we compute the Shapley values with a linear regression, so there must be some error involved. Some Python packages cannot give us an error bound, which makes it confusing to know whether the error comes from the linear regression, the data, or the model.
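+
+In practice the `shap` package wraps all of this; a minimal usage sketch on a toy model (the data and model here are illustrative):
+
+```python
+import shap
+from sklearn.datasets import make_regression
+from sklearn.ensemble import RandomForestRegressor
+
+X, y = make_regression(n_samples=200, n_features=4, random_state=0)
+model = RandomForestRegressor(random_state=0).fit(X, y)
+
+explainer = shap.Explainer(model)  # dispatches to a suitable algorithm (Tree SHAP here)
+shap_values = explainer(X[:50])    # one attribution phi_i per feature per sample
+shap.plots.beeswarm(shap_values)   # global view of the feature effects
+```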
+
+# Reference
+
+[Shapley Additive Explanations (SHAP)](https://www.youtube.com/watch?v=VB9uV-x0gtg)
+
+[SHAP: A reliable way to analyze your model interpretability](https://towardsdatascience.com/shap-a-reliable-way-to-analyze-your-model-interpretability-874294d30af6)
+
+[【Python可解释机器学习库SHAP】:Python的可解释机器学习库SHAP](https://zhuanlan.zhihu.com/p/483622352)
+
+[Shapley Values : Data Science Concepts](https://www.youtube.com/watch?v=NBg7YirBTN8)
+
+# Appendix
+
+Other methods to interpret models:
+
+[Papers with Code - SHAP Explained](https://paperswithcode.com/method/shap)
\ No newline at end of file
diff --git a/content/computer_sci/deep_learning_and_machine_learning/Model_interpretability/attachments/Pasted image 20230329165406.png b/content/computer_sci/deep_learning_and_machine_learning/Model_interpretability/attachments/Pasted image 20230329165406.png
new file mode 100644
index 000000000..cfebf84bd
Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/Model_interpretability/attachments/Pasted image 20230329165406.png differ
diff --git a/content/computer_sci/deep_learning_and_machine_learning/Model_interpretability/attachments/Pasted image 20230329165429.png b/content/computer_sci/deep_learning_and_machine_learning/Model_interpretability/attachments/Pasted image 20230329165429.png
new file mode 100644
index 000000000..cfebf84bd
Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/Model_interpretability/attachments/Pasted image 20230329165429.png differ
diff --git a/content/computer_sci/deep_learning_and_machine_learning/Model_interpretability/attachments/Pasted image 20230329165523.png b/content/computer_sci/deep_learning_and_machine_learning/Model_interpretability/attachments/Pasted image 20230329165523.png
new file mode 100644
index 000000000..805c00c13
Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/Model_interpretability/attachments/Pasted image 20230329165523.png differ
diff --git a/content/computer_sci/deep_learning_and_machine_learning/Model_interpretability/attachments/Pasted image 20230329165623.png b/content/computer_sci/deep_learning_and_machine_learning/Model_interpretability/attachments/Pasted image 20230329165623.png
new file mode 100644
index 000000000..d838834b3
Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/Model_interpretability/attachments/Pasted image 20230329165623.png differ
diff --git a/content/computer_sci/deep_learning_and_machine_learning/Model_interpretability/attachments/Pasted image 20230329165818.png b/content/computer_sci/deep_learning_and_machine_learning/Model_interpretability/attachments/Pasted image 20230329165818.png
new file mode 100644
index 000000000..bfc00cad3
Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/Model_interpretability/attachments/Pasted image 20230329165818.png differ
diff --git a/content/computer_sci/deep_learning_and_machine_learning/Model_interpretability/attachments/Pasted image 20230329165840.png b/content/computer_sci/deep_learning_and_machine_learning/Model_interpretability/attachments/Pasted image 20230329165840.png
new file mode 100644
index 000000000..c47074c02
Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/Model_interpretability/attachments/Pasted image 20230329165840.png differ
diff --git a/content/computer_sci/deep_learning_and_machine_learning/Model_interpretability/attachments/Pasted image 20230329181956.png
b/content/computer_sci/deep_learning_and_machine_learning/Model_interpretability/attachments/Pasted image 20230329181956.png
new file mode 100644
index 000000000..a7bb26baa
Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/Model_interpretability/attachments/Pasted image 20230329181956.png differ
diff --git a/content/computer_sci/deep_learning_and_machine_learning/Model_interpretability/attachments/Pasted image 20230329182011.png b/content/computer_sci/deep_learning_and_machine_learning/Model_interpretability/attachments/Pasted image 20230329182011.png
new file mode 100644
index 000000000..1766fbd88
Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/Model_interpretability/attachments/Pasted image 20230329182011.png differ
diff --git a/content/computer_sci/deep_learning_and_machine_learning/Model_interpretability/attachments/Pasted image 20230329205039.png b/content/computer_sci/deep_learning_and_machine_learning/Model_interpretability/attachments/Pasted image 20230329205039.png
new file mode 100644
index 000000000..d9c5c634a
Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/Model_interpretability/attachments/Pasted image 20230329205039.png differ
diff --git a/content/computer_sci/deep_learning_and_machine_learning/Model_interpretability/attachments/Pasted image 20230329205130.png b/content/computer_sci/deep_learning_and_machine_learning/Model_interpretability/attachments/Pasted image 20230329205130.png
new file mode 100644
index 000000000..5c9f8b6a7
Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/Model_interpretability/attachments/Pasted image 20230329205130.png differ
diff --git a/content/computer_sci/deep_learning_and_machine_learning/NLP/basic/tokenization.md b/content/computer_sci/deep_learning_and_machine_learning/NLP/basic/tokenization.md
new file mode 100644
index 000000000..86a3b2111
--- /dev/null
+++ b/content/computer_sci/deep_learning_and_machine_learning/NLP/basic/tokenization.md
@@ -0,0 +1,9 @@
+---
+title: Tokenization
+tags:
+- NLP
+- deep-learning
+- tokenization
+- basic
+---
+
diff --git a/content/computer_sci/deep_learning_and_machine_learning/Trick/DTW.md b/content/computer_sci/deep_learning_and_machine_learning/Trick/DTW.md
new file mode 100644
index 000000000..2f073d4c2
--- /dev/null
+++ b/content/computer_sci/deep_learning_and_machine_learning/Trick/DTW.md
@@ -0,0 +1,58 @@
+---
+title: Dynamic Time Warping (DTW)
+tags:
+- metrics
+- time-series-dealing
+- evaluation
+---
+
+![](computer_sci/deep_learning_and_machine_learning/Trick/attachments/Pasted%20image%2020230526164724.png)
+
+Euclidean distance can be a poor choice between time series because of warping along the time axis. DTW is a distance measure for comparing two time series that takes such warping into account. This section explains how to compute the DTW distance.
+
+# Detail
+
+## Step 1. Prepare the input sequences
+
+Assume two time series, A & B.
+
+## Step 2. Compute the distance matrix
+
+Create a distance matrix whose elements are the distances between every time point of A and every time point of B. Common distance measures include Euclidean distance, Manhattan distance, and cosine similarity; choose one appropriate to your data type and needs.
+
+## Step 3. Initialize the accumulated distance matrix
+
+Create an accumulated distance matrix of the same size as the distance matrix; it stores the accumulated distance from the start to each position. Set the accumulated distance of the starting point (0, 0) to the distance at the starting point of the distance matrix.
+
+## Step 4. Compute the accumulated distances
+
+Starting from the origin, fill in the accumulated distance of every position by dynamic programming. For each position (i, j), **the accumulated distance equals the distance at that position plus the minimum accumulated distance among the three adjacent positions:**
+
+$$
+DTW(i, j) = d_{i,j} + \min{\{DTW(i-1,j), DTW(i, j-1), DTW(i-1, j-1)\}}
+$$
+
+## Step 5. Trace back the optimal path
+
+Starting from the bottom-right corner of the accumulated distance matrix, trace back to the starting point (0, 0) along the path of minimum accumulated distance. The recorded path is the optimal warping path.
+
+## Step 6. Compute the final distance
+
+From the accumulated distance along the optimal path, compute the final DTW distance.
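+
+A direct Python sketch of Steps 2-6, using a squared point-wise cost as in the example below:
+
+```python
+import numpy as np
+
+def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
+    n, m = len(a), len(b)
+    D = np.full((n + 1, m + 1), np.inf)   # accumulated distance matrix
+    D[0, 0] = 0.0
+    for i in range(1, n + 1):
+        for j in range(1, m + 1):
+            cost = (a[i - 1] - b[j - 1]) ** 2
+            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
+    return np.sqrt(D[n, m])  # square root of the optimal path's accumulated cost
+
+a = np.array([1.0, 2.0, 3.0, 2.0])
+b = np.array([1.0, 1.0, 2.0, 3.0, 2.0])
+print(dtw_distance(a, b))  # 0.0, because b is just a time-warped version of a
+```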
+
+# Example
+
+![](computer_sci/deep_learning_and_machine_learning/Trick/attachments/Pasted%20image%2020230526170120.png)
+
+On the left is the distance matrix; on the right is the DTW matrix, i.e. the accumulated distance matrix.
+
+![](computer_sci/deep_learning_and_machine_learning/Trick/attachments/Pasted%20image%2020230526170921.png)
+
+![](computer_sci/deep_learning_and_machine_learning/Trick/attachments/Pasted%20image%2020230526171119.png)
+
+Tracing back yields the optimal warping path; the DTW distance is the square root of the optimal warping path's accumulated cost, in this example $\sqrt{15}$.
+
diff --git a/content/computer_sci/deep_learning_and_machine_learning/Trick/attachments/Pasted image 20230522151015.png b/content/computer_sci/deep_learning_and_machine_learning/Trick/attachments/Pasted image 20230522151015.png
new file mode 100644
index 000000000..5fe2d911f
Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/Trick/attachments/Pasted image 20230522151015.png differ
diff --git a/content/computer_sci/deep_learning_and_machine_learning/Trick/attachments/Pasted image 20230526164724.png b/content/computer_sci/deep_learning_and_machine_learning/Trick/attachments/Pasted image 20230526164724.png
new file mode 100644
index 000000000..75965a471
Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/Trick/attachments/Pasted image 20230526164724.png differ
diff --git a/content/computer_sci/deep_learning_and_machine_learning/Trick/attachments/Pasted image 20230526170120.png b/content/computer_sci/deep_learning_and_machine_learning/Trick/attachments/Pasted image 20230526170120.png
new file mode 100644
index 000000000..f053f0852
Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/Trick/attachments/Pasted image 20230526170120.png differ
diff --git a/content/computer_sci/deep_learning_and_machine_learning/Trick/attachments/Pasted image 20230526170921.png b/content/computer_sci/deep_learning_and_machine_learning/Trick/attachments/Pasted image 20230526170921.png
new file mode 100644
index 000000000..4549c820f
Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/Trick/attachments/Pasted image 20230526170921.png differ
diff --git a/content/computer_sci/deep_learning_and_machine_learning/Trick/attachments/Pasted image 20230526171119.png b/content/computer_sci/deep_learning_and_machine_learning/Trick/attachments/Pasted image 20230526171119.png
new file mode 100644
index 000000000..41b8120bc
Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/Trick/attachments/Pasted image 20230526171119.png differ
diff --git a/content/computer_sci/deep_learning_and_machine_learning/Trick/quantile_loss.md b/content/computer_sci/deep_learning_and_machine_learning/Trick/quantile_loss.md
new file mode 100644
index 000000000..ea71c817d
--- /dev/null
+++ b/content/computer_sci/deep_learning_and_machine_learning/Trick/quantile_loss.md
@@ -0,0 +1,63 @@
+---
+title: Quantile loss
+tags:
+- loss-function
+- deep-learning
+- deep-learning-math
+---
+
+In most real-world prediction problems, the uncertainty carried by our predictions has real value. Compared with giving only a point estimate, knowing the range of a prediction can significantly improve decision making in many business applications. **Quantile loss** is the loss function that helps us learn that prediction range.
+
+Quantile loss measures the discrepancy between the predicted distribution and the target distribution, and it is especially suited to prediction problems with high uncertainty.
+
+# What is quantile
+
+[Quantile](math/Statistics/Basic/Quantile.md)
+
+# What is a prediction interval
+
+A prediction interval is a way of quantifying the uncertainty of a prediction. It provides **a range with probabilistic upper and lower bounds** for the estimate of the outcome variable.
+
+![](computer_sci/deep_learning_and_machine_learning/Trick/attachments/Pasted%20image%2020230522151015.png)
+
+The output itself is a random variable and therefore has a distribution. The purpose of a prediction interval is to understand how likely the outcome is to be correct.
+
+# What is Quantile Loss
+
+With quantile loss, both the prediction and the target are expressed in quantile form; for example, we can represent the prediction by its α-quantile and the target by the α-quantile of the true values. Quantile loss then measures the discrepancy between these two distributions, computed with the quantile loss function below.
+
+The quantile regression loss is used to predict a quantile. For example, a prediction for the 0.9 quantile should over-predict 90% of the time.
+
+For one data point, with prediction $y_i^p$ and ground truth $y_i$, the regression loss for a quantile $q$ is:
+
+$$
+L(y_i^p, y_i) = \max[q(y_i - y_i^p), (q-1)(y_i - y_i^p)]
+$$
+
+Minimizing this loss over a set of predictions yields the quantile $q$.
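+
+A minimal NumPy sketch of this loss, with a quick check that the constant prediction minimizing it is indeed the sample $q$-quantile:
+
+```python
+import numpy as np
+
+def quantile_loss(y_true, y_pred, q):
+    """Mean quantile (pinball) loss for a quantile level q in (0, 1)."""
+    diff = y_true - y_pred
+    return np.mean(np.maximum(q * diff, (q - 1) * diff))
+
+y = np.random.default_rng(0).exponential(size=10_000)  # skewed toy data
+cands = np.linspace(0.0, 10.0, 1001)
+best = cands[np.argmin([quantile_loss(y, c, 0.9) for c in cands])]
+print(best, np.quantile(y, 0.9))  # the two values nearly coincide
+```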
+
+## Intuitive Understanding
+
+In the regression loss above, since $q$ lies between 0 and 1, the first term is positive and dominates when we under-predict ($y_i^p < y_i$), and the second term dominates when we over-predict ($y_i^p > y_i$). With $q$ equal to 0.5, under-prediction and over-prediction are penalized by the same factor, and we obtain the median. The larger $q$ is, the more heavily under-prediction is penalized compared with over-prediction. With $q$ equal to 0.75, under-prediction is penalized by a factor of 0.75 and over-prediction by a factor of 0.25; an under-prediction is three times as *costly* for the model as an over-prediction, so the model settles at the 0.75 quantile.
+
+## Why Quantile loss
+
+> [!quote]
+> **"Homoscedasticity", the constant-variance assumption**
+>
+> In least-squares regression, prediction intervals rest on the assumption that the residuals have constant variance across the values of the independent variables. This assumption is called "homoscedasticity", or the constant-variance assumption.
+>
+> It is a reasonable assumption about the nature of the error term in the regression model. In least-squares regression, we assume the observed value of the dependent variable is the true value plus an error term, and that this error term is independent and identically distributed, i.e. it has the same distribution at every value of the independent variables.
+>
+> If the residuals have constant variance across the values of the independent variables, the size of the error does not change significantly as the independent variables change. In that case, statistical methods can compute a prediction interval that gives a confidence level for future observations.
+>
+> However, if the constant-variance assumption does not hold, i.e. the residuals have different variances at different values of the independent variables, the results of least-squares regression can be problematic: the prediction interval may understate or overstate the uncertainty, making the confidence placed in future observations inaccurate.
+
+Quantile loss regression provides sensible prediction intervals even for residuals with non-constant variance or non-normal distributions.
+
+# Reference
+
+* [Kandi, Shabeel. "Prediction Intervals in Forecasting: Quantile Loss Function." _Analytics Vidhya_, 24 Apr. 2023, https://medium.com/analytics-vidhya/prediction-intervals-in-forecasting-quantile-loss-function-18f72501586f.](https://medium.com/analytics-vidhya/prediction-intervals-in-forecasting-quantile-loss-function-18f72501586f)
\ No newline at end of file
diff --git a/content/computer_sci/deep_learning_and_machine_learning/clustering/k-means/application/color8bit_style.py b/content/computer_sci/deep_learning_and_machine_learning/clustering/k-means/application/color8bit_style.py
new file mode 100644
index 000000000..e27529a1d
--- /dev/null
+++ b/content/computer_sci/deep_learning_and_machine_learning/clustering/k-means/application/color8bit_style.py
@@ -0,0 +1,109 @@
+import cv2
+import numpy as np
+import matplotlib.pyplot as plt
+from tkinter import Tk, filedialog
+from mpl_toolkits.mplot3d import Axes3D
+from sklearn.cluster import KMeans
+
+
+# Create a Tkinter root window
+root = Tk()
+root.withdraw()
+
+# Open a file explorer dialog to select an image file
+file_path = filedialog.askopenfilename()
+
+# Read the selected image using cv2
+image = cv2.imread(file_path)
+
+# Convert the image to RGB color space
+image_rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
+
+# Get the dimensions of the image
+height, width, _ = image_rgb.shape
+
+# Reshape the image to a 2D array: one axis indexes pixels, the other the RGB channels
+pixels = image_rgb.reshape((height * width, 3))
+
+# Create an empty dataset
+dataset = []
+
+# Iterate over each pixel and store the RGB values as a vector in the dataset
+for pixel in pixels:
+    dataset.append(pixel)
+
+# Convert the dataset to a NumPy array
+dataset = np.array(dataset)
+
+# Get the RGB values from the dataset
+red = dataset[:, 0]
+green = dataset[:, 1]
+blue = dataset[:, 2]
+
+
+
+# plot show
+'''
+# Plot the histograms
+plt.figure(figsize=(10, 6))
+plt.hist(red, bins=256, color='red', alpha=0.5, label='Red')
+plt.hist(green, bins=256, color='green', alpha=0.5, label='Green')
+plt.hist(blue, bins=256, color='blue', alpha=0.5, label='Blue')
+plt.title('RGB Value Histogram')
+plt.xlabel('RGB Value')
+plt.ylabel('Frequency')
+plt.legend()
+plt.show()
+
+
+# Plot the 3D scatter graph
+fig =
plt.figure(figsize=(10, 8)) +ax = fig.add_subplot(111, projection='3d') +ax.scatter(red, green, blue, c='#000000', s=1) +ax.set_xlabel('Red') +ax.set_ylabel('Green') +ax.set_zlabel('Blue') +ax.set_title('RGB Scatter Plot') +plt.show() +''' + + +# Perform k-means clustering +num_clusters = 3 # Specify the desired number of clusters +kmeans = KMeans(n_clusters=num_clusters, n_init='auto', random_state=42) +labels = kmeans.fit_predict(dataset) + + +# Show K-means Clustering result +''' +# Plot the scatter plot for each iteration of the k-means algorithm +fig = plt.figure(figsize=(10, 8)) +ax = fig.add_subplot(111, projection='3d') + +for i in range(num_clusters): + cluster_points = dataset[labels == i] + ax.scatter(cluster_points[:, 0], cluster_points[:, 1], cluster_points[:, 2], s=1) + +ax.set_xlabel('Red') +ax.set_ylabel('Green') +ax.set_zlabel('Blue') +ax.set_title('RGB Scatter Plot - K-Means Clustering') +plt.show() +''' + +center_values = kmeans.cluster_centers_.astype(int) + +for i in range(num_clusters): + dataset[labels == i] = center_values[i] + + +# Reshape the pixels array back into an image with the original dimensions and convert it to BGR color space +reshaped_image = dataset.reshape((height, width, 3)) +reshaped_image_bgr = cv2.cvtColor(reshaped_image.astype(np.uint8), cv2.COLOR_RGB2BGR) + +# Display the image using matplotlib +plt.imshow(reshaped_image) +plt.show() + +# Opencv store image +cv2.imwrite('C:/Users/BME51/Desktop/color8bit_style.jpg', reshaped_image_bgr) diff --git a/content/computer_sci/deep_learning_and_machine_learning/clustering/k-means/application/example.png b/content/computer_sci/deep_learning_and_machine_learning/clustering/k-means/application/example.png new file mode 100644 index 000000000..ff3c7cb91 Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/clustering/k-means/application/example.png differ diff --git a/content/computer_sci/deep_learning_and_machine_learning/clustering/k-means/attachments/3ed5fee41bd566be093bebd62a33d12.jpg b/content/computer_sci/deep_learning_and_machine_learning/clustering/k-means/attachments/3ed5fee41bd566be093bebd62a33d12.jpg new file mode 100644 index 000000000..0fda57126 Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/clustering/k-means/attachments/3ed5fee41bd566be093bebd62a33d12.jpg differ diff --git a/content/computer_sci/deep_learning_and_machine_learning/clustering/k-means/attachments/k4XcapI.gif b/content/computer_sci/deep_learning_and_machine_learning/clustering/k-means/attachments/k4XcapI.gif new file mode 100644 index 000000000..ce5544e15 Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/clustering/k-means/attachments/k4XcapI.gif differ diff --git a/content/computer_sci/deep_learning_and_machine_learning/clustering/k-means/k_means.md b/content/computer_sci/deep_learning_and_machine_learning/clustering/k-means/k_means.md new file mode 100644 index 000000000..227d09cd6 --- /dev/null +++ b/content/computer_sci/deep_learning_and_machine_learning/clustering/k-means/k_means.md @@ -0,0 +1,102 @@ +--- +title: K-means Clustering Algorithm +tags: +- machine-learning +- clustering +- algorithm +--- + +# Step by Step + +Our algorithm works as follows, assuming we have inputs $x_1, x_2, \cdots, x_n$ and value of $K$ + +- **Step 1** - Pick $K$ random points as cluster centers called centroids. +- **Step 2** - Assign each $x_i$ to nearest cluster by calculating its distance to each centroid. 
+- **Step 3** - Find the new cluster centers by taking the average of the assigned points.
+- **Step 4** - Repeat Steps 2 and 3 until none of the cluster assignments change.
+
+![](computer_sci/deep_learning_and_machine_learning/clustering/k-means/attachments/k4XcapI.gif)
+
+# Implementation
+
+## Core code
+
+### Distance calculation
+
+```python
+# Euclidean distance calculator
+def dist(a, b, ax=1):
+    return np.linalg.norm(a - b, axis=ax)
+```
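+
+The snippets in this section operate on a 2-D dataset `X` that the note never defines, and the update loop below also uses `deepcopy`. A minimal setup sketch (the `make_blobs` call and its parameters are my assumption, not part of the original):
+
+```python
+import numpy as np
+from copy import deepcopy
+from sklearn.datasets import make_blobs
+
+# 200 two-dimensional points grouped around 3 centers
+X, _ = make_blobs(n_samples=200, centers=3, cluster_std=5.0, random_state=0)
+X = X * 10 + 40  # shift/scale so all coordinates are comfortably positive
+```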
+
+### Generate random initial cluster centers
+
+```python
+# Number of clusters
+k = 3
+# X coordinates of random centroids
+C_x = np.random.randint(0, np.max(X)-20, size=k)
+# Y coordinates of random centroids
+C_y = np.random.randint(0, np.max(X)-20, size=k)
+C = np.array(list(zip(C_x, C_y)), dtype=np.float32)
+print(C)
+```
+
+### Assign each point to its nearest cluster, then update each cluster's center
+
+```python
+# To store the value of centroids when it updates
+C_old = np.zeros(C.shape)
+# Cluster labels (0, 1, 2)
+clusters = np.zeros(len(X))
+# Error func. - Distance between new centroids and old centroids
+error = dist(C, C_old, None)
+# Loop will run till the error becomes zero
+while error != 0:
+    # Assigning each value to its closest cluster
+    for i in range(len(X)):
+        distances = dist(X[i], C)
+        cluster = np.argmin(distances)
+        clusters[i] = cluster
+    # Storing the old centroid values
+    C_old = deepcopy(C)
+    # Finding the new centroids by taking the average value
+    for i in range(k):
+        points = [X[j] for j in range(len(X)) if clusters[j] == i]
+        C[i] = np.mean(points, axis=0)
+    error = dist(C, C_old, None)
+```
+
+## Simple approach by scikit-learn
+
+```python
+from sklearn.cluster import KMeans
+
+# Number of clusters
+kmeans = KMeans(n_clusters=3)
+# Fitting the input data
+kmeans = kmeans.fit(X)
+# Getting the cluster labels
+labels = kmeans.predict(X)
+# Centroid values
+centroids = kmeans.cluster_centers_
+
+# Comparing with scikit-learn centroids
+print(C)          # From scratch
+print(centroids)  # From scikit-learn
+```
+
+# Application
+
+## 8bit style
+
+Read an image and run k-means clustering on its pixel values to give the picture an 8-bit color style.
+
+![](computer_sci/deep_learning_and_machine_learning/clustering/k-means/attachments/3ed5fee41bd566be093bebd62a33d12.jpg)
+
+[color8bit_style.py](https://github.com/PinkR1ver/Jude.W-s-Knowledge-Brain/blob/master/Deep_Learning_And_Machine_Learning/clustering/k-means/application/color8bit_style.py)
+
+# Reference
+
+* [K-Means Clustering in Python, https://mubaris.com/posts/kmeans-clustering/. Accessed 3 July 2023.](https://mubaris.com/posts/kmeans-clustering/)
\ No newline at end of file
diff --git a/content/computer_sci/deep_learning_and_machine_learning/deep_learning/GRU.md b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/GRU.md
new file mode 100644
index 000000000..44d842603
--- /dev/null
+++ b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/GRU.md
@@ -0,0 +1,7 @@
+---
+title: Gated Recurrent Unit
+tags:
+- deep-learning
+- time-series-dealing
+---
+
diff --git a/content/computer_sci/deep_learning_and_machine_learning/deep_learning/LSTM.md b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/LSTM.md
new file mode 100644
index 000000000..8f3780639
--- /dev/null
+++ b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/LSTM.md
@@ -0,0 +1,157 @@
+---
+title: Long Short-Term Memory Networks
+tags:
+- deep-learning
+- time-series-dealing
+- basic
+---
+
+> [!quote]
+> When I was learning LSTM, the new deep learning block, the *Transformer*, dominated the NLP field. However, *Transformers* don't decisively outperform LSTMs in time-series-related tasks. The main reason is that LSTMs are more adept at handling **local temporal data**.
+
+
+LSTM was designed to solve the long-term dependency problem of traditional RNNs. When processing long sequences, a traditional RNN struggles to remember information from far back, because the gradients gradually vanish or explode as they propagate through time. This makes it hard for traditional RNNs to capture long-range dependencies, e.g., understanding the semantics of a long sentence in natural language processing.
+
+LSTM solves this problem effectively with a technique called gating. Its key component is the memory cell, which can selectively store, read, and erase information. The core of the LSTM lies in its three gates: the input gate, the forget gate, and the output gate.
+
+1. Input gate: decides which information gets written into the memory cell. It uses a sigmoid activation to weigh the importance of the input.
+
+2. Forget gate: decides which information gets erased from the memory cell. Using another sigmoid activation and an element-wise multiplication, it determines how much of the previous memory state is kept.
+
+3. Output gate: decides which information flows from the memory cell to the next time step. The output passes through a sigmoid activation and a tanh activation.
+
+
+These gates let the LSTM selectively remember or forget specific information, which is what makes it effective on long sequences. The network structure lets information flow through time while retaining a long-term memory of the past.
+
+# Arch
+
+Comparing the traditional RNN block with the LSTM block helps the design stick in memory.
+
+Traditional RNN network:
+
+![](computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted%20image%2020230522161052.png)
+
+
+LSTM block:
+![](computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted%20image%2020230522161520.png)
+
+![](computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted%20image%2020230522161546.png)
+
+## Core idea
+
+The core idea of the LSTM is the cell state. The cell state can be seen as an internal memory running through the entire LSTM network. It resembles the hidden state of a traditional RNN, but its design is more refined, which lets the LSTM capture long-term dependencies much better.
+
+![](computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted%20image%2020230522162225.png)
+
+Updates to the cell state are controlled by the gates: in an LSTM, the input gate, forget gate, and output gate together decide how the cell state is updated.
+
+
+## Step-by-Step LSTM Walk Through
+
+### Step 1 - Throw away information
+
+The first step of the LSTM is to throw away information, via the forget gate layer.
+
+![](computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted%20image%2020230522162536.png)
+
+The forget gate layer takes $x_t$ and $h_{t-1}$ as input and computes $f_t$, with $f_t$ in the range (0,1). This $f_t$ then multiplies the cell state $C_{t-1}$: 1 stands for "completely keep", 0 stands for "completely get rid of this".
+
+A good example: in NLP, the cell state might carry the gender of the current subject so that the correct pronouns can be used. When we see a new subject, we want to forget the gender of the old subject.
+
+### Step 2 - Decide what information we're going to store
+
+The second step decides which information to store in the cell state. It has two parts: the first computes $i_t$ through the "input gate layer"; the second computes a vector of new candidate values $\tilde{C}_t$ through a tanh layer. These two parts are then used to update the information in the cell state.
+
+![](computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted%20image%2020230522163353.png)
+
+![](computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted%20image%2020230522164237.png)
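+
+In symbols (the standard formulation from the cited Colah post; $\sigma$ is the sigmoid, $W$ and $b$ are the gate weights and biases, and $*$ is element-wise multiplication):
+
+$$
+\begin{aligned}
+f_t &= \sigma(W_f \cdot [h_{t-1}, x_t] + b_f) \\
+i_t &= \sigma(W_i \cdot [h_{t-1}, x_t] + b_i) \\
+\tilde{C}_t &= \tanh(W_C \cdot [h_{t-1}, x_t] + b_C) \\
+C_t &= f_t * C_{t-1} + i_t * \tilde{C}_t
+\end{aligned}
+$$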
+
+### Step 3 - Decide output
+
+![](computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted%20image%2020230522164609.png)
+
+The final output is a filtered version of the cell state, computed as in the figure above.
+
+# Variants on LSTM
+
+LSTM has many variants; a few are listed here.
+
+## Adding "peephole connections"
+
+
+![](computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted%20image%2020230522165117.png)
+
+Peephole connections feed the cell state into the gate layers' input. You can choose to add a "peephole connection" to some of the three gates and not to others.
+
+The purpose of peephole connections is to strengthen the LSTM's ability to model the cell state and better capture long-term dependencies in the sequence.
+
+## Use coupled forget and input gates
+
+![](computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted%20image%2020230522170059.png)
+
+
+## GRU (Gated Recurrent Unit) ⭐⭐⭐
+
+* [GRU](computer_sci/deep_learning_and_machine_learning/deep_learning/GRU.md)
+
+![](computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted%20image%2020230522170214.png)
+
+The GRU is the best-known LSTM variant and deserves its own note.
+
+
+# Demo code & PyTorch LSTM graph explanation
+
+![](computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted%20image%2020230523164806.png)
+
+```python
+import torch
+import torch.nn as nn
+import numpy as np
+
+class LSTM(nn.Module):
+    def __init__(self, input_size, output_size, hidden_size, num_layers):
+        super(LSTM, self).__init__()
+        self.input_size = input_size
+        self.output_size = output_size
+        self.hidden_size = hidden_size
+        self.num_layers = num_layers
+
+        self.lstm = nn.LSTM(input_size, hidden_size, num_layers)
+
+        self.fc = nn.Linear(hidden_size, output_size)
+
+    def forward(self, input_seq):
+        # input_seq: (seq_len, batch, input_size)
+        # lstm_out: (seq_len, batch, hidden_size)
+
+        lstm_out, (hidden_state, cell_state) = self.lstm(input_seq)
+
+        lstm_out = self.fc(lstm_out)
+
+        return lstm_out, hidden_state, cell_state
+
+
+if __name__ == '__main__':
+    seq = np.linspace(0, 3801, 3801)
+    h = torch.randn(1, 1, 64)
+    c = torch.randn(1, 1, 64)
+
+    lstm = LSTM(1, 1, 64, 1)
+
+    input_seq = torch.Tensor(seq).view(len(seq), 1, -1)
+
+    lstm_out, hidden_state, cell_state = lstm(input_seq)
+    lstm_out = torch.squeeze(lstm_out)
+
+    print(lstm_out.shape)
+    print(hidden_state.shape)
+    print(cell_state.shape)
+```
+
+# Reference
+
+* _Understanding LSTM Networks -- Colah’s Blog_. https://colah.github.io/posts/2015-08-Understanding-LSTMs/. Accessed 22 May 2023.
+* Hochreiter, Sepp, and Jürgen Schmidhuber. “Long Short-Term Memory.” _Neural Computation_, vol. 9, no. 8, Nov. 1997, pp. 1735–80. _DOI.org (Crossref)_, https://doi.org/10.1162/neco.1997.9.8.1735.
+* _Recurrent Nets That Time and Count_. https://ieeexplore.ieee.org/document/861302/. Accessed 22 May 2023.
\ No newline at end of file
diff --git a/content/computer_sci/deep_learning_and_machine_learning/deep_learning/XGBoost.md b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/XGBoost.md
new file mode 100644
index 000000000..8e0b0c95f
--- /dev/null
+++ b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/XGBoost.md
@@ -0,0 +1,147 @@
+---
+title: XGBoost
+tags:
+- deep-learning
+- ensemble-learning
+---
+
+
+XGBoost is an open-source software library that implements optimized distributed gradient boosting machine learning algorithms under the **Gradient Boosting** framework.
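+
+As a quick taste before the theory, a minimal usage sketch with the `xgboost` Python package's scikit-learn wrapper (the toy data and parameter values are my own illustration, not from this note):
+
+```python
+from xgboost import XGBRegressor
+from sklearn.datasets import make_regression
+from sklearn.model_selection import train_test_split
+
+# Toy regression data
+X, y = make_regression(n_samples=500, n_features=10, noise=0.1, random_state=0)
+X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
+
+# Gradient-boosted trees: each new tree fits the residual errors of the ensemble so far
+model = XGBRegressor(n_estimators=100, max_depth=3, learning_rate=0.1)
+model.fit(X_train, y_train)
+
+print(model.score(X_test, y_test))  # R^2 on held-out data
+```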
+
+# What you need to know first
+
+* [🚧🚧AdaBoost](computer_sci/deep_learning_and_machine_learning/deep_learning/adaBoost.md)
+
+# What is XGBoost
+
+**XGBoost**, which stands for Extreme Gradient Boosting, is a scalable, distributed **gradient-boosted** decision tree (GBDT) machine learning library. It provides parallel tree boosting and is the leading machine learning library for regression, classification, and ranking problems.
+
+It’s vital to an understanding of XGBoost to first grasp the machine learning concepts and algorithms that XGBoost builds upon: **supervised machine learning**, **decision trees**, **ensemble learning**, and **gradient boosting**.
+
+Here we need to know **ensemble learning** and **gradient boosting**, the two concepts I didn't know before.
+
+## What is Ensemble Learning
+
+**Ensemble learning** is a general meta-approach to machine learning that **seeks better predictive performance by combining the predictions from multiple models**.
+
+The three main classes of ensemble learning methods are **bagging**, **stacking**, and **boosting**.
+
+### Bagging
+
+Bagging means **Bootstrap aggregation**. It's an ensemble learning method that seeks a diverse group of ensemble members by **varying the training data**.
+
+This typically involves using a single machine learning algorithm, almost always an unpruned decision tree, and **training each model on a different sample of the same training dataset.** The predictions made by the ensemble members are then **combined using simple statistics, such as voting or averaging.**
+
+Key to the method is the manner in which each sample of the dataset is prepared to train ensemble members. Each model gets its own unique sample of the dataset.
+
+Bagging adopts the **bootstrap distribution** for generating **different base learners**. In other words, it applies **bootstrap sampling** to obtain the data subsets for training the base learners.
+
+![](computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Untitled.png)
+
+
+Key words of the bagging method:
+
+- **Bootstrap sampling**
+- **Voting or averaging of predictions**
+- **Unpruned decision tree**
+
+> Random forest is the typical example based on the bagging method.
+>
+
+### Stacking
+
+Stacking means **Stacked Generalization**. It is an ensemble method that seeks a diverse group of members by **varying the model types** fit on the training data and using a model to combine predictions.
+
+> *Stacking is a general procedure where a learner is trained to combine the individual learners. Here, the individual learners are called the first-level learners, while the combiner is called the second-level learner, or meta-learner.*
+>
+
+Stacking has its own nomenclature, where ensemble members are referred to as **level-0 models** and the model that is used to combine the predictions is referred to as a **level-1 model**.
+
+The two-level hierarchy of models is the most common approach, although more layers of models can be used. For example, instead of a single level-1 model, we might have 3 or 5 level-1 models and a single level-2 model that combines the predictions of level-1 models in order to make a prediction.
+
+![](computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Untitled%201.png)
+
+Key words of the stacking method:
+
+- **Unchanged training dataset**
+- **Different machine learning algorithms for each ensemble member**
+- **Machine learning model to learn how to best combine predictions**
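+
+A minimal sketch of this two-level structure with scikit-learn's `StackingClassifier` (the choice of level-0 learners and the toy data are my own illustration, not from this note):
+
+```python
+from sklearn.datasets import make_classification
+from sklearn.ensemble import StackingClassifier, RandomForestClassifier
+from sklearn.svm import SVC
+from sklearn.linear_model import LogisticRegression
+
+X, y = make_classification(n_samples=500, random_state=0)
+
+# Level-0 models: different algorithms fit on the same training data
+level0 = [('rf', RandomForestClassifier(random_state=0)), ('svc', SVC(random_state=0))]
+
+# Level-1 model (meta-learner): learns how to best combine the level-0 predictions
+clf = StackingClassifier(estimators=level0, final_estimator=LogisticRegression())
+clf.fit(X, y)
+print(clf.score(X, y))
+```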
+
+### Boosting
+
+**Boosting** is an ensemble method that seeks to change the training data to focus attention on examples that previously fit models on the training dataset have gotten wrong.
+
+> *In boosting, […] the training dataset for each subsequent classifier increasingly focuses on instances misclassified by previously generated classifiers.*
+>
+
+The key property of boosting ensembles is the idea of **correcting prediction errors**. The models are fit and added to the ensemble sequentially such that the second model attempts to correct the predictions of the first model, the third corrects the second model, and so on.
+
+This typically involves the use of very simple decision trees that only make a single or a few decisions, referred to in boosting as weak learners. The predictions of the weak learners are combined using simple voting or averaging, although **the contributions are weighted in proportion to their performance or capability**. The objective is to develop a so-called "***strong learner***" from many purpose-built "***weak learners***".
+
+Typically, the training **dataset is left unchanged**, and instead the learning algorithm is modified to **pay more or less attention to specific samples based on whether they have been predicted correctly or incorrectly** by previously added ensemble members.
+
+![](computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Untitled%202.png)
+
+Key words of the boosting method:
+
+- **Bias training data** toward those examples that are hard to predict
+- **Iteratively add ensemble members to correct predictions of prior models**
+- Combine predictions **using a weighted average** of models
+
+![](computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Untitled%203.png)
+
+Types of boosting:
+
+- Adaptive boosting
+- Gradient boosting
+- Extreme gradient boosting
+
+# Introduction to the three main types of boosting method
+
+## [Adaptive boosting](https://www.notion.so/AdaBoost-8e7009e35aee4334b31d46bfd7e3dbba)
+
+Adaptive Boosting (AdaBoost) was one of **the earliest boosting models** developed. It adapts and tries to **self-correct** in every iteration of the boosting process.
+
+AdaBoost initially gives the same weight to each data point. Then, it automatically adjusts the weights of the data points after every decision tree. It **gives more weight to incorrectly classified items** to correct them for the next round. It repeats the process until the residual error, or the difference between actual and predicted values, falls below an acceptable threshold.
+
+You can use AdaBoost with many predictors, and it is typically not as sensitive as other boosting algorithms. This approach does not work well when there is correlation among features or high data dimensionality. Overall, **AdaBoost is a suitable type of boosting for classification problems**.
+
+**Check the learning material below for more detail on this algorithm. 🚧🚧🚧**
+
+## Gradient boosting
+
+Gradient Boosting (GB) is similar to AdaBoost in that it, too, is a **sequential training technique**. The difference between AdaBoost and GB is that GB does not give incorrectly classified items more weight. 
Instead, GB software **optimizes the loss function by generating base learners sequentially** so that **the present base learner is always more effective than the previous one**. This method **attempts to generate accurate results initially instead of correcting errors throughout the process**, like AdaBoost. For this reason, GB software can lead to more accurate results. Gradient Boosting can help with both classification and regression-based problems. + +![](computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Untitled%204.png) + +## Extreme gradient boosting + +Extreme Gradient Boosting (XGBoost) improves gradient boosting for **computational speed and scale** in several ways. XGBoost uses multiple cores on the CPU so that learning can occur in parallel during training. It is a boosting algorithm that can handle extensive datasets, making it attractive for big data applications. The key features of XGBoost are parallelization, distributed computing, cache optimization, and out-of-core processing. + +# Reference + +## XGBoost + +* [What is XGBoost?](https://www.nvidia.com/en-us/glossary/data-science/xgboost/) + +* [XGBoost Part 1 (of 4): Regression](https://www.youtube.com/watch?v=OtD8wVaFm6E) + +## Ensemble Learning + +* [A Gentle Introduction to Ensemble Learning Algorithms - MachineLearningMastery.com](https://machinelearningmastery.com/tour-of-ensemble-learning-algorithms/) + +* [集成学习(ensemble learning)原理详解_Soyoger的博客-CSDN博客_ensemble l](https://blog.csdn.net/qq_36330643/article/details/77621232) + +* [What is Boosting? Guide to Boosting in Machine Learning - AWS](https://aws.amazon.com/what-is/boosting/) + +* [Regression Trees, Clearly Explained!!!](https://www.youtube.com/watch?v=g9c66TUylZ4&list=PLblh5JKOoLUICTaGLRoHQDuF_7q2GfuJF&index=45) + +* [AdaBoost, Clearly Explained](https://www.youtube.com/watch?v=LsK-xG1cLYA) + +* [Gradient Boost Part 1 (of 4): Regression Main Ideas](https://www.youtube.com/watch?v=3CC4N4z3GJc) diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/AdaBoost.md b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/adaBoost.md similarity index 100% rename from content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/AdaBoost.md rename to content/computer_sci/deep_learning_and_machine_learning/deep_learning/adaBoost.md diff --git a/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/1.png b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/1.png new file mode 100644 index 000000000..6ef2272ec Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/1.png differ diff --git a/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230315195603.png b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230315195603.png new file mode 100644 index 000000000..0ac2c5c47 Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230315195603.png differ diff --git a/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230315200009.png b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230315200009.png new file mode 100644 index 000000000..c3c719f18 Binary files /dev/null and 
b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230315200009.png differ diff --git a/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230315201906.png b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230315201906.png new file mode 100644 index 000000000..723b873ae Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230315201906.png differ diff --git a/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230315202047.png b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230315202047.png new file mode 100644 index 000000000..0d4315fff Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230315202047.png differ diff --git a/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230315202314.png b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230315202314.png new file mode 100644 index 000000000..9f999a893 Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230315202314.png differ diff --git a/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230315205148.png b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230315205148.png new file mode 100644 index 000000000..26a0c7413 Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230315205148.png differ diff --git a/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230315205727.png b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230315205727.png new file mode 100644 index 000000000..7fd1cfc36 Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230315205727.png differ diff --git a/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230315205918.png b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230315205918.png new file mode 100644 index 000000000..060f2ab48 Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230315205918.png differ diff --git a/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230315210032.png b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230315210032.png new file mode 100644 index 000000000..aeee247dc Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230315210032.png differ diff --git a/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230315210631.png b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230315210631.png new file mode 100644 index 000000000..9a4d96e17 Binary files /dev/null and 
b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230315210631.png differ diff --git a/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230315210640.png b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230315210640.png new file mode 100644 index 000000000..7e9ff069f Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230315210640.png differ diff --git a/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230315210704.png b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230315210704.png new file mode 100644 index 000000000..7e9ff069f Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230315210704.png differ diff --git a/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230316160103.png b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230316160103.png new file mode 100644 index 000000000..0bc6d8945 Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230316160103.png differ diff --git a/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230316162635.png b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230316162635.png new file mode 100644 index 000000000..2d752de3a Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230316162635.png differ diff --git a/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230316162642.png b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230316162642.png new file mode 100644 index 000000000..9a5f64da6 Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230316162642.png differ diff --git a/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230413112821.png b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230413112821.png new file mode 100644 index 000000000..980f2e231 Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230413112821.png differ diff --git a/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230413112822.png b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230413112822.png new file mode 100644 index 000000000..980f2e231 Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230413112822.png differ diff --git a/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230522161052.png b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230522161052.png new file mode 100644 index 000000000..6698796f9 Binary files /dev/null and 
b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230522161052.png differ diff --git a/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230522161520.png b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230522161520.png new file mode 100644 index 000000000..1c90ae8aa Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230522161520.png differ diff --git a/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230522161546.png b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230522161546.png new file mode 100644 index 000000000..5bd046c5a Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230522161546.png differ diff --git a/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230522162225.png b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230522162225.png new file mode 100644 index 000000000..44ffe784a Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230522162225.png differ diff --git a/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230522162523.png b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230522162523.png new file mode 100644 index 000000000..93cd8fe8e Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230522162523.png differ diff --git a/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230522162536.png b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230522162536.png new file mode 100644 index 000000000..dc8b24ddd Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230522162536.png differ diff --git a/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230522163338.png b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230522163338.png new file mode 100644 index 000000000..e013f3f0a Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230522163338.png differ diff --git a/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230522163353.png b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230522163353.png new file mode 100644 index 000000000..9a1cc09f4 Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230522163353.png differ diff --git a/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230522164229.png b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230522164229.png new file mode 100644 index 000000000..d4f8b78c6 Binary files /dev/null and 
b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230522164229.png differ diff --git a/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230522164237.png b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230522164237.png new file mode 100644 index 000000000..c57aa16f9 Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230522164237.png differ diff --git a/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230522164557.png b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230522164557.png new file mode 100644 index 000000000..337bbb502 Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230522164557.png differ diff --git a/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230522164609.png b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230522164609.png new file mode 100644 index 000000000..7020464f9 Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230522164609.png differ diff --git a/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230522165102.png b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230522165102.png new file mode 100644 index 000000000..d3a5521d1 Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230522165102.png differ diff --git a/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230522165117.png b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230522165117.png new file mode 100644 index 000000000..0b123faae Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230522165117.png differ diff --git a/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230522170059.png b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230522170059.png new file mode 100644 index 000000000..4ee289abc Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230522170059.png differ diff --git a/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230522170214.png b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230522170214.png new file mode 100644 index 000000000..ad7079675 Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230522170214.png differ diff --git a/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230523164806.png b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230523164806.png new file mode 100644 index 000000000..13f0df2ea Binary files /dev/null and 
b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Pasted image 20230523164806.png differ diff --git a/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Untitled 1.png b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Untitled 1.png new file mode 100644 index 000000000..59f3674d5 Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Untitled 1.png differ diff --git a/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Untitled 2.png b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Untitled 2.png new file mode 100644 index 000000000..fdd4bafa4 Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Untitled 2.png differ diff --git a/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Untitled 3.png b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Untitled 3.png new file mode 100644 index 000000000..8281c60aa Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Untitled 3.png differ diff --git a/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Untitled 4.png b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Untitled 4.png new file mode 100644 index 000000000..23849514b Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Untitled 4.png differ diff --git a/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Untitled.png b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Untitled.png new file mode 100644 index 000000000..d72e23841 Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attachments/Untitled.png differ diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/⭐Attention.md b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/attention.md similarity index 100% rename from content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/⭐Attention.md rename to content/computer_sci/deep_learning_and_machine_learning/deep_learning/attention.md diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/Decision_Tree.md b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/decision_tree.md similarity index 100% rename from content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/Decision_Tree.md rename to content/computer_sci/deep_learning_and_machine_learning/deep_learning/decision_tree.md diff --git a/content/computer_sci/deep_learning_and_machine_learning/deep_learning/deep_learning_MOC.md b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/deep_learning_MOC.md new file mode 100644 index 000000000..5c0f61410 --- /dev/null +++ b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/deep_learning_MOC.md @@ -0,0 +1,36 @@ +--- +title: Deep Learning MOC +tags: + - Catalog + - MOC +--- + + +# Attention is all you need + +* [[computer_sci/deep_learning_and_machine_learning/deep_learning/attention|Attention Blocker]] +* [[computer_sci/deep_learning_and_machine_learning/deep_learning/transformer|transformer]] + + +# Tree-like 
architecture + +* [Decision Tree](computer_sci/deep_learning_and_machine_learning/deep_learning/decision_tree.md) +* [Random Forest](computer_sci/deep_learning_and_machine_learning/deep_learning/random_forest.md) +* [Deep Neural Decision Forests](computer_sci/deep_learning_and_machine_learning/deep_learning/deep_neural_decision_forests.md) +* [XGBoost](computer_sci/deep_learning_and_machine_learning/deep_learning/XGBoost.md) + + +# Ensemble Learning + +* [adaBoost](computer_sci/deep_learning_and_machine_learning/deep_learning/adaBoost.md) +* [XGBoost](computer_sci/deep_learning_and_machine_learning/deep_learning/XGBoost.md) + + +# Time-series dealing block + +* [LSTM](computer_sci/deep_learning_and_machine_learning/deep_learning/LSTM.md) + +# Clustering Algorithm + + +* [K-means Clustering Algorithm](computer_sci/deep_learning_and_machine_learning/clustering/k-means/k_means.md) \ No newline at end of file diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/Deep_Neural_Decision_Forests.md b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/deep_neural_decision_forests.md similarity index 97% rename from content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/Deep_Neural_Decision_Forests.md rename to content/computer_sci/deep_learning_and_machine_learning/deep_learning/deep_neural_decision_forests.md index 2c7cd3919..dea4e9064 100644 --- a/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/Deep_Neural_Decision_Forests.md +++ b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/deep_neural_decision_forests.md @@ -6,8 +6,8 @@ tags: # Background -* [Decision Tree](computer_sci/deep_learning_and_machine_learning/deep_learning/Decision_Tree.md) -* [Random Forest](computer_sci/deep_learning_and_machine_learning/deep_learning/Random_Forest.md) +* [Decision Tree](computer_sci/deep_learning_and_machine_learning/deep_learning/decision_tree.md) +* [Random Forest](computer_sci/deep_learning_and_machine_learning/deep_learning/random_forest.md) # What is Deep Neural Decision Forests diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/Random_Forest.md b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/random_forest.md similarity index 91% rename from content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/Random_Forest.md rename to content/computer_sci/deep_learning_and_machine_learning/deep_learning/random_forest.md index a6e39ac50..41d7fdbce 100644 --- a/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/Random_Forest.md +++ b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/random_forest.md @@ -6,7 +6,7 @@ tags: # Background -* [Decision Tree](computer_sci/deep_learning_and_machine_learning/deep_learning/Decision_Tree.md) +* [Decision Tree](computer_sci/deep_learning_and_machine_learning/deep_learning/decision_tree.md) # Detail diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/Transformer.md b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/transformer.md similarity index 80% rename from content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/Transformer.md rename to content/computer_sci/deep_learning_and_machine_learning/deep_learning/transformer.md index 7bf36149a..7d75c4991 100644 --- a/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/Transformer.md +++ 
b/content/computer_sci/deep_learning_and_machine_learning/deep_learning/transformer.md
@@ -6,7 +6,7 @@ tags:
 ---
 
 > [!info]
-> 在学习Transformer前,你需要学习 [⭐Attention](computer_sci/deep_learning_and_machine_learning/deep_learning/⭐Attention.md)
+> Before learning the Transformer, you need to learn [attention](computer_sci/deep_learning_and_machine_learning/deep_learning/attention.md)
 
 
diff --git a/content/computer_sci/deep_learning_and_machine_learning/deep_learning_MOC.md b/content/computer_sci/deep_learning_and_machine_learning/deep_learning_MOC.md
new file mode 100644
index 000000000..ec42b171c
--- /dev/null
+++ b/content/computer_sci/deep_learning_and_machine_learning/deep_learning_MOC.md
@@ -0,0 +1,22 @@
+---
+title: Deep Learning - MOC
+tags:
+- MOC
+- deep-learning
+---
+
+# Tech Explanation
+
+* [⭐Deep Learning MOC](computer_sci/deep_learning_and_machine_learning/deep_learning/deep_learning_MOC.md)
+
+* [✨Machine Learning MOC](computer_sci/deep_learning_and_machine_learning/machine_learning/MOC.md)
+
+* [LLM - MOC](computer_sci/deep_learning_and_machine_learning/LLM/LLM_MOC.md)
+
+# Deep-learning Research
+
+* [Model Interpretability](computer_sci/deep_learning_and_machine_learning/Model_interpretability/Model_Interpretability_MOC.md)
+
+* [Famous Model - MOC](computer_sci/deep_learning_and_machine_learning/Famous_Model/Famous_Model_MOC.md)
+
+* [Model Evaluation - MOC](computer_sci/deep_learning_and_machine_learning/Evaluation/model_evaluation_MOC.md)
\ No newline at end of file
diff --git a/content/computer_sci/deep_learning_and_machine_learning/machine_learning/MOC.md b/content/computer_sci/deep_learning_and_machine_learning/machine_learning/MOC.md
new file mode 100644
index 000000000..49436a120
--- /dev/null
+++ b/content/computer_sci/deep_learning_and_machine_learning/machine_learning/MOC.md
@@ -0,0 +1,7 @@
+---
+title: Machine Learning MOC
+tags:
+  - MOC
+  - machine-learning
+---
+* [SVM](computer_sci/deep_learning_and_machine_learning/machine_learning/SVM.md)
\ No newline at end of file
diff --git a/content/computer_sci/deep_learning_and_machine_learning/machine_learning/SVM.md b/content/computer_sci/deep_learning_and_machine_learning/machine_learning/SVM.md
new file mode 100644
index 000000000..6aead8781
--- /dev/null
+++ b/content/computer_sci/deep_learning_and_machine_learning/machine_learning/SVM.md
@@ -0,0 +1,47 @@
+---
+title: Support Vector Machine
+tags:
+  - machine-learning
+---
+
+# Overview
+
+![](computer_sci/deep_learning_and_machine_learning/machine_learning/attachments/Pasted%20image%2020230904225904.png)
+
+# Hyper Parameters
+
+## Kernel Function
+
+* Linear
+* Polynomial
+* RBF
+    * $\gamma$: The gamma parameter **defines the influence of each training example on the decision boundary**. A higher gamma value gives more weight to closer points, while a lower value allows points further away to have a significant impact. Higher values of gamma can lead to overfitting, especially on noisy datasets.
+## C Parameter
+
+The C parameter, also known as the regularization parameter, controls the trade-off between maximizing the margin and minimizing the classification error. **A smaller C value allows for a larger margin but may lead to misclassification of some training examples, while a larger C value focuses on classifying all training examples correctly but might result in a narrower margin.**
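+
+A minimal sketch of these two hyper-parameters in scikit-learn (the toy dataset and parameter values are my own illustration, not from this note):
+
+```python
+from sklearn.datasets import make_classification
+from sklearn.svm import SVC
+
+X, y = make_classification(n_samples=200, random_state=0)
+
+# RBF kernel; larger C -> narrower margin, larger gamma -> more local decision boundary
+clf = SVC(kernel='rbf', C=1.0, gamma=0.1)
+clf.fit(X, y)
+print(clf.score(X, y))
+```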
+## [Training Method](https://wadhwatanya1234.medium.com/multi-class-classification-one-vs-all-one-vs-one-993dd23ae7ca)
+
+* One-vs-All
+* One-vs-One
+# Detail
+
+## Score Function
+
+$$
+f(x) = \sum_i \alpha_i y_i G(x, x_i) + bias
+$$
+* $\alpha_i$ is the weight of the corresponding support vector
+* $y_i$ is the label of the corresponding support vector
+* $G(x,x_i)$ is the kernel function of the input sample $x$ and the support vector $x_i$
+* $bias$ is the bias term
+## Decision Function
+
+$$
+Decision \ Function = \text{sign}(f(x))
+$$
+We determine a sample's category by checking the sign of its decision function.
+# Reference
+
+* [“华为开发者论坛.” _Huawei_, https://developer.huawei.com/consumer/cn/forum/topic/41598169. Accessed 4 Sept. 2023.](https://developer.huawei.com/consumer/cn/forum/topic/41598169)
+* [Multi-class Classification — One-vs-All & One-vs-One](https://wadhwatanya1234.medium.com/multi-class-classification-one-vs-all-one-vs-one-993dd23ae7ca)
+* [Saini, Anshul. “Guide on Support Vector Machine (SVM) Algorithm.” _Analytics Vidhya_, 12 Oct. 2021, https://www.analyticsvidhya.com/blog/2021/10/support-vector-machinessvm-a-complete-guide-for-beginners/.](https://www.analyticsvidhya.com/blog/2021/10/support-vector-machinessvm-a-complete-guide-for-beginners/)
\ No newline at end of file
diff --git a/content/computer_sci/deep_learning_and_machine_learning/machine_learning/attachments/Pasted image 20230904225904.png b/content/computer_sci/deep_learning_and_machine_learning/machine_learning/attachments/Pasted image 20230904225904.png
new file mode 100644
index 000000000..114e20539
Binary files /dev/null and b/content/computer_sci/deep_learning_and_machine_learning/machine_learning/attachments/Pasted image 20230904225904.png differ
diff --git a/content/math/MOC.md b/content/math/MOC.md
new file mode 100644
index 000000000..8baa7ebe4
--- /dev/null
+++ b/content/math/MOC.md
@@ -0,0 +1,24 @@
+---
+title: Math MOC
+tags:
+- math
+- MOC
+---
+
+# Statistics
+
+## Basic concept
+
+* [Quantile](math/Statistics/Basic/Quantile.md)
+
+# Discrete mathematics
+
+## Set theory
+
+* [Cantor Expansion](math/discrete_mathematics/set_theory/cantor_expansion/cantor_expansion.md)
+
+
+# Optimization Problem
+
+
+* [Quadratic Programming](math/optimization_problem/Quadratic_Programming.md)
\ No newline at end of file
diff --git a/content/math/Statistics/Basic/Quantile.md b/content/math/Statistics/Basic/Quantile.md
new file mode 100644
index 000000000..830d54c14
--- /dev/null
+++ b/content/math/Statistics/Basic/Quantile.md
@@ -0,0 +1,17 @@
+---
+title: Quantile
+tags:
+- math
+- statistics
+- basic
+---
+
+A **quantile**, also called a quantile point, is a cut point dividing the range of a random variable's probability distribution into contiguous intervals of equal probability. There is one fewer cut point than the number of intervals; for example, 3 cut points split the range into 4 intervals.
+
+Common quantiles include the median (the 2-quantile), the quartiles (4-quantiles), and the percentiles (100-quantiles).
+
+1. Median: the value in the middle after sorting the data. It splits the data into two halves: half of the observations are below the median and half are above it.
+
+2. Quartiles: quartiles divide the data into four equal parts via the lower quartile (25th percentile), the median (50th percentile), and the upper quartile (75th percentile). The lower quartile is the value at the 25% position after sorting, the median at the 50% position, and the upper quartile at the 75% position.
+
+3. Percentiles: percentiles divide the data into 100 equal parts and give the value at a specific percentage position. For example, the 75th percentile is the value at the 75% position after sorting the data.
\ No newline at end of file
diff --git a/content/math/discrete_mathematics/set_theory/cantor_expansion/cantor_expansion.md b/content/math/discrete_mathematics/set_theory/cantor_expansion/cantor_expansion.md
new file mode 100644
index 000000000..f6dd54860
--- /dev/null
+++ b/content/math/discrete_mathematics/set_theory/cantor_expansion/cantor_expansion.md
@@ -0,0 +1,213 @@
+---
+title: Cantor expansion
+tags:
+- code-design
+- basic
+- math
+- algorithm
+- discrete-mathematics
+- set-theory
+---
+
+
+Cantor expansion, also called Cantor encoding, is a mathematical technique named after the German mathematician Georg Cantor. It maps **a sequence of numbers to a unique number, providing an encoding and an ordering of such sequences.**
+
+# Objective
+
+Cantor expansion and its inverse are two algorithms that *convert between a permutation and its lexicographic rank*.
+
+# Application
+
+* As a hash function for enumeration problems
+
+# Step by Step
+
+## Deriving Cantor Expansion
+
+### Lemma 1 and Lemma 2
+
+Take the 4th-order permutations generated by DFS as an example ~~*(the DFS algorithm itself is not detailed here)*~~, with their ranks:
+
+```text
+0  1 2 3 4
+1  1 2 4 3
+2  1 3 2 4
+3  1 3 4 2
+4  1 4 2 3
+5  1 4 3 2
+6  2 1 3 4
+7  2 1 4 3
+8  2 3 1 4
+9  2 3 4 1
+10 2 4 1 3
+11 2 4 3 1
+12 3 1 2 4
+13 ...
+```
+
+Observe that the permutations with leading digit 1 all fall in the rank interval [0,5]; those starting with 2 in [6,11]; those with 3 in [12,17]; and those with 4 in [18,23]. Since the first position can take 4 values, all 4th-order permutations are split into 4 intervals of length $\frac{4!}{4}=3!=6$: permutations starting with 1 occupy the 1st such interval of length 6, those starting with 2 the 2nd, those starting with 3 the 3rd, and so on.
+
+> [!Lemma1]
+> In general, an $n$th-order permutation with leading digit $k$ lies in the rank interval $[(k-1) \times (n-1)!,\quad k \times (n-1)! - 1]$
+
+Having narrowed down the range, how do we locate the exact rank?
+
+Observe what happens when the first digit is masked:
+
+```text
+0  X 2 3 4 <==> 1 2 3
+1  X 2 4 3 <==> 1 3 2
+2  X 3 2 4 <==> 2 1 3
+3  X 3 4 2 <==> 2 3 1
+4  X 4 2 3 <==> 3 1 2
+5  X 4 3 2 <==> 3 2 1
+6  X 1 3 4 <==> 1 2 3
+7  X 1 4 3 <==> 1 3 2
+8  X 3 1 4 <==> 2 1 3
+9  X 3 4 1 <==> 2 3 1
+10 X 4 1 3 <==> 3 1 2
+11 X 4 3 1 <==> 3 2 1
+12 X 1 2 4 <==> 1 2 3
+13 ...
+```
+
+The table shows that, **considering only the relative order of the elements** (treating the digits purely as symbols for relative size), *a 4th-order permutation with its first digit removed behaves exactly like a 3rd-order permutation*; it just uses different symbols.
+
+> [!Lemma2]
+> So, to reduce an $n$th-order permutation to the corresponding $(n-1)$th-order one, as shown above, remove the first digit and decrement by 1 every digit that **forms an ascending pair with the removed first digit** (*i.e., every digit larger than it*).
+
+### Calculating a permutation's rank with the lemmas
+
+For any permutation, applying Lemma 1 and Lemma 2 iteratively yields its rank.
+
+#### Step
+1. Use **Lemma 1** to determine the rank interval of the permutations with the same order and the same leading digit, and add its left endpoint to the result.
+2. Use **Lemma 2** to reduce the $n$th-order permutation to an $(n-1)$th-order one.
+3. Repeat steps 1 and 2 until a 1st-order permutation remains, then output the result.
+
+#### Example
+
+$$
+35142 \rightarrow 3\dot{5}1\dot{4}2 \rightarrow 34132 \rightarrow 341\dot{3}\dot{2} \rightarrow 34121
+$$
+
+$$
+index = (3-1) \times 4! + (4-1) \times 3! + (1-1) \times 2! + (2-1) \times 1! = 67
+$$
+
+## Definition
+
+> [!hint]
+> An ascending pair is an ordered pair of two elements of the sequence in which the former appears earlier in the sequence than the latter and is smaller than it.
+
+Let $a_{1\cdots n}$ denote an $n$th-order permutation, with $a_i$ its $i$th digit. Define the auxiliary sequence $b_{1\cdots n}$ of $a_{1\cdots n}$ by letting $b_j$ be the number of ascending pairs in which $a_j$ is the latter element:
+$$
+\forall \ 1 \leq j \leq n, \quad b_j = |\{(a_i, a_j) \ | \ 1 \leq i < j \ \text{and} \ a_i < a_j\}|
+$$
+
+The Cantor expansion formula is:
+$$
+F(a_{1\cdots n}) = \sum_{i=1}^n (a_i-b_i-1)\times(n-i)!
+$$
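+
+As a check against the earlier example (my own verification, consistent with the formula above): for $a = 35142$ we get $b = (0, 1, 0, 2, 1)$, so
+
+$$
+F(35142) = (3-0-1)\,4! + (5-1-1)\,3! + (1-0-1)\,2! + (4-2-1)\,1! + (2-1-1)\,0! = 48 + 18 + 0 + 1 + 0 = 67
+$$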
+
+
+# Code
+
+## Method 1
+
+Implement the definition directly, but without materializing the $b$ sequence; compute each $b_i$ on demand.
+
+```python
+class CantorExpansion():
+    def cantor_encode(self, s:list) -> int:
+
+        '''
+        Encode a list of integers to a single integer using Cantor expansion.
+        '''
+
+        count = 0
+
+        for i in range(len(s)):
+            count += self.factorial(len(s) - i - 1) * (s[i] - self.count_smaller(s, i) - 1)
+
+        return count
+
+    def factorial(self, x:int) -> int:
+        if x == 1 or x == 0:
+            return 1
+        else:
+            return self.factorial(x - 1) * x
+
+    def count_smaller(self, s:list, i:int) -> int:
+        count = 0
+        for j in range(i):
+            if s[j] < s[i]:
+                count += 1
+        return count
+```
+
+python file goto: [cantor_expansion.py](https://github.com/PinkR1ver/Jude.W-s-Knowledge-Brain/blob/master/Math/discrete_mathematics/set_theory/cantor_expansion/code/cantor_expansion.py)
+
+The complexity is $\varTheta(n^2)$.
+
+## Method 2
+
+Restating the notion of an ascending pair:
+
+> [!tip]
+> An ascending pair is a pair of array elements arr[i] and arr[j] with i < j and arr[i] < arr[j].
+
+When visiting each element we must evaluate `count_smaller(s, i)`, which pushes the complexity up to $\varTheta(n^2)$.
+
+The purpose of `count_smaller(s, i)` is exactly to count ascending pairs, so a faster algorithm for counting ascending pairs lowers the overall complexity.
+
+A Fenwick tree (binary indexed tree) is a data structure for efficiently maintaining [prefix sums](tmp_script/prefix_sum.md); it supports both prefix-sum queries and point updates in $O(\log{n})$ time.
+
+### Trick - counting ascending pairs with a Fenwick tree, step by step
+
+Step 1: Discretization. For convenience, first discretize the value array, mapping it onto a contiguous integer range starting at 0 so the values can be used directly as Fenwick tree indices.
+
+Step 2: Initialize the Fenwick tree. Create a Fenwick tree `bit` of length n+1 with all elements initialized to 0. The extra element bit[0] is unused; it just simplifies the indexing.
+
+Step 3: Count pairs. Traverse the discretized array arr from left to right; for each element arr[i], we need the number of elements to its left that are smaller than it.
+
+In a Fenwick tree, that count is a prefix-sum query: for the current element arr[i], query the prefix sum up to index arr[i] - 1; the result is the number of smaller elements already seen to the left of arr[i].
+
+Step 4: Update the Fenwick tree. After counting the pairs for the current element, update the tree so later queries compute correct prefix sums:
+
+- Add 1 at index arr[i] in the Fenwick tree, recording one more occurrence of arr[i].
+- Repeat until all elements have been processed.
+
+Step 5: Total count. After the traversal, each position of the Fenwick tree holds the occurrence count of that value, and the accumulated prefix-sum queries give the total number of ascending pairs.
+
+---
+
+With this Fenwick-tree pair-counting algorithm, the complexity of computing the Cantor expansion drops to $\varTheta(n\log{n})$.
+
+
+# Inverse Cantor Expansion
+
+The idea of the inverse Cantor expansion is to use Lemma 1 to pin down each position in turn.
+
+Example:
+
+`inverse_cantor_expansion(n=5, x=96)`:
+
+* Step 1. If lexicographic ranks start from 1, then (x - 1) = 95 permutations precede this one.
+* Step 2. floor(95 / (n-1)!) = floor(95 / 4!) = 3 with remainder 23, meaning 3 digits are smaller than the first digit, so the first digit is 4.
+* Step 3. The remaining digits are located by 23: floor(23 / 3!) = 3 with remainder 5, meaning 3 unused digits are smaller than the second digit; that points at 4, but 4 is already used, so it is 5.
+* Step 4. The remaining digits are located by 5: floor(5 / 2!) = 2 with remainder 1, meaning 2 unused digits are smaller than the third digit, which is therefore 3.
+* Step 5. Likewise, the fourth digit is 2 and the last digit is 1.
+
+The decoded permutation is therefore 45321.
+
+# Generalized Cantor Expansion
+
+TODO ... ...
+
+A generalized Cantor expansion may fail to satisfy the bijection condition.
+
+# Reference
+
+* ChatGPT
+* [“【给初心者的】康托展开.” 知乎专栏, https://zhuanlan.zhihu.com/p/39377593. Accessed 6 July 2023.](https://zhuanlan.zhihu.com/p/39377593)
+
diff --git a/content/math/discrete_mathematics/set_theory/cantor_expansion/code/cantor_expansion.py b/content/math/discrete_mathematics/set_theory/cantor_expansion/code/cantor_expansion.py
new file mode 100644
index 000000000..14f04533e
--- /dev/null
+++ b/content/math/discrete_mathematics/set_theory/cantor_expansion/code/cantor_expansion.py
@@ -0,0 +1,63 @@
+class CantorExpansion():
+    def cantor_encode(self, s:list) -> int:
+
+        '''
+        Encode a list of integers to a single integer using Cantor expansion.
+        '''
+
+        count = 0
+
+        for i in range(len(s)):
+            count += self.factorial(len(s) - i - 1) * (s[i] - self.count_smaller(s, i) - 1)
+
+        return count
+
+    def cantor_decode(self, x:int, n:int) -> list:
+
+        '''
+        Decode a single integer to a list of integers using Cantor expansion.
+        '''
+
+        s = [None] * n
+        used_dict = {}
+
+        for num in range(1, n + 1):
+            used_dict[num] = False
+
+        pos = 0
+        for i in range(n - 1, -1, -1):
+
+            smaller = x // self.factorial(i)
+            x %= self.factorial(i)
+
+            # Find the (smaller + 1)-th unused number; 'j' avoids shadowing the outer loop variable 'i'
+            count = 0
+            for j in range(1, n + 1):
+                if not used_dict[j]:
+                    count += 1
+                    if count == smaller + 1:
+                        s[pos] = j
+                        used_dict[j] = True
+                        pos += 1
+                        break
+
+        return s
+
+    def factorial(self, x:int) -> int:
+        if x == 1 or x == 0:
+            return 1
+        else:
+            return self.factorial(x - 1) * x
+
+    def count_smaller(self, s:list, i:int) -> int:
+        count = 0
+        for j in range(i):
+            if s[j] < s[i]:
+                count += 1
+        return count
+
+
+if __name__ == '__main__':
+    s = CantorExpansion()
+    print(s.cantor_encode([3, 5, 7, 4, 1, 2, 9, 6, 8]))
+    print(s.cantor_decode(0, 9))
diff --git a/content/math/optimization_problem/Quadratic_Programming.md b/content/math/optimization_problem/Quadratic_Programming.md
new file mode 100644
index 000000000..54a1fa434
--- /dev/null
+++ b/content/math/optimization_problem/Quadratic_Programming.md
@@ -0,0 +1,77 @@
+---
+title: Quadratic Programming
+tags:
+  - math
+  - optimize
+  - optimization
+---
+
+# Why I write this note?
+
+[猪熊一波. _帮女朋友降维打击领导!_哔哩哔哩_bilibili_. https://www.bilibili.com/video/BV1ZN411T7c9/. Accessed 30 Nov. 2023.](https://www.bilibili.com/video/BV1ZN411T7c9/?spm_id_from=333.999.0.0&vd_source=c47136abc78922800b17d6ce79d6e19f)
+
+# Tips
+
+> [!tip]
+> "Programming" in this context refers to a formal procedure for solving mathematical problems. This usage dates to the 1940s and is not specifically tied to the more recent notion of "computer programming." To avoid confusion, some practitioners prefer the term "optimization" — e.g., "quadratic optimization."
+>
+> In Chinese, "programming" in this sense is rendered as 规划, so "quadratic programming" is translated as 二次规划.
+
+> [!Summary]
+> A Quadratic Program (QP) has a quadratic objective function and linear constraints.
+
+# Problem Formulation
+
+The quadratic programming problem with $n$ variables and $m$ constraints can be formulated as follows. Given:
+
+* a real-valued, $n$-dimensional vector $c$,
+* an $n\times n$-dimensional real symmetric matrix $Q$,
+* an $m \times n$-dimensional real matrix $A$, and
+* an $m$-dimensional real vector $b$,
+
+the objective of quadratic programming is to find an $n$-dimensional vector $x$ that will
+
+$$
+\text{minimize} \quad \frac{1}{2} x^{T}Qx + c^{T}x
+$$
+$$
+\text{subject to} \quad Ax \preceq b
+$$
+$$
+x = \begin{bmatrix}
+x_1 \\
+x_2 \\
+\vdots \\
+x_n
+\end{bmatrix}, \quad Q =
+\begin{bmatrix}
+Q_{11} & Q_{12} & \cdots & Q_{1n} \\
+\vdots & \vdots & \ddots & \vdots \\
+Q_{n1} & Q_{n2} & \cdots & Q_{nn}
+\end{bmatrix}, \quad
+c = \begin{bmatrix}
+c_1 \\
+c_2 \\
+\vdots \\
+c_n
+\end{bmatrix}, \quad
+A =
+\begin{bmatrix}
+A_{11} & A_{12} & \cdots & A_{1n} \\
+\vdots & \vdots & \ddots & \vdots \\
+A_{m1} & A_{m2} & \cdots & A_{mn}
+\end{bmatrix}, \quad
+b = \begin{bmatrix}
+b_1 \\
+b_2 \\
+\vdots \\
+b_m
+\end{bmatrix}
+$$
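+
+A minimal numerical sketch of this formulation with `cvxpy` (the solver choice and the toy values of $Q$, $c$, $A$, $b$ are my own illustration, not from this note):
+
+```python
+import numpy as np
+import cvxpy as cp
+
+# minimize 1/2 x^T Q x + c^T x  subject to  A x <= b
+Q = np.array([[2.0, 0.0], [0.0, 2.0]])   # symmetric positive definite
+c = np.array([-2.0, -6.0])
+A = np.array([[1.0, 1.0], [-1.0, 2.0]])
+b = np.array([2.0, 2.0])
+
+x = cp.Variable(2)
+objective = cp.Minimize(0.5 * cp.quad_form(x, Q) + c @ x)
+problem = cp.Problem(objective, [A @ x <= b])
+problem.solve()
+
+print(x.value)  # optimizer of the QP
+```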
+
+
+# Reference
+
+
+* [猪熊一波. _帮女朋友降维打击领导!_哔哩哔哩_bilibili_. https://www.bilibili.com/video/BV1ZN411T7c9/. Accessed 30 Nov. 2023.](https://www.bilibili.com/video/BV1ZN411T7c9/?spm_id_from=333.999.0.0&vd_source=c47136abc78922800b17d6ce79d6e19f)
+* [“Quadratic Programming.” _Wikipedia_, 25 Nov. 2023. _Wikipedia_, https://en.wikipedia.org/w/index.php?title=Quadratic_programming&oldid=1186784717.](https://en.wikipedia.org/wiki/Quadratic_programming#:~:text=Quadratic%20programming%20(QP)%20is%20the,linear%20constraints%20on%20the%20variables.)
diff --git a/content/math/real_analysis/attachments/6BC0B163CEFCF127E1D70326AB7D1648 1.png b/content/math/real_analysis/attachments/6BC0B163CEFCF127E1D70326AB7D1648 1.png
new file mode 100644
index 000000000..694805276
Binary files /dev/null and b/content/math/real_analysis/attachments/6BC0B163CEFCF127E1D70326AB7D1648 1.png differ
diff --git a/content/math/real_analysis/attachments/6BC0B163CEFCF127E1D70326AB7D1648.png b/content/math/real_analysis/attachments/6BC0B163CEFCF127E1D70326AB7D1648.png
new file mode 100644
index 000000000..694805276
Binary files /dev/null and b/content/math/real_analysis/attachments/6BC0B163CEFCF127E1D70326AB7D1648.png differ
diff --git a/content/math/real_analysis/attachments/78DC2683DB0DF2EFEB6215DAB8C18C25.png b/content/math/real_analysis/attachments/78DC2683DB0DF2EFEB6215DAB8C18C25.png
new file mode 100644
index 000000000..02bff2bb9
Binary files /dev/null and b/content/math/real_analysis/attachments/78DC2683DB0DF2EFEB6215DAB8C18C25.png differ
diff --git a/content/math/real_analysis/cauchy_principal_value.md b/content/math/real_analysis/cauchy_principal_value.md
new file mode 100644
index 000000000..cd9d32d1f
--- /dev/null
+++ b/content/math/real_analysis/cauchy_principal_value.md
@@ -0,0 +1,26 @@
+---
+title: Cauchy Principal Value
+tags:
+  - math
+  - real-analysis
+---
+# Notation
+
+
+$$
+\text{p.v.} \int_{-\infty}^{\infty} f(x)dx = \lim_{a\rightarrow+\infty} \int_{-a}^{a} f(x) dx = \lim_{a\rightarrow+\infty}[F(a) - F(-a)]
+$$
+
+where $F$ is an antiderivative of $f$.
+
+![](math/real_analysis/attachments/6BC0B163CEFCF127E1D70326AB7D1648%201.png)
+
+
+![](math/real_analysis/attachments/78DC2683DB0DF2EFEB6215DAB8C18C25.png)
+
+The Cauchy principal value is a method for assigning values to *certain improper integrals* that would otherwise be undefined. In this method, a singularity in the integration interval is avoided by restricting the integral to the non-singular domain.
+
+# Reference
+
+* [_Real Analysis 64 | Cauchy Principal Value_. _www.youtube.com_, https://www.youtube.com/watch?v=0SP2b0nFpwI. Accessed 3 Jan. 2024.](https://www.youtube.com/watch?v=0SP2b0nFpwI)
+* [“Cauchy Principal Value.” _Wikipedia_, 31 Dec. 2023. _Wikipedia_, https://en.wikipedia.org/w/index.php?title=Cauchy_principal_value&oldid=1192842366.](https://en.wikipedia.org/wiki/Cauchy_principal_value)
\ No newline at end of file
_Wikipedia_, https://en.wikipedia.org/w/index.php?title=Cauchy_principal_value&oldid=1192842366.](https://en.wikipedia.org/wiki/Cauchy_principal_value) \ No newline at end of file diff --git a/content/photography/Aesthetic/Landscape/Landscape_MOC.md b/content/photography/Aesthetic/Landscape/Landscape_MOC.md new file mode 100644 index 000000000..a98f99cd2 --- /dev/null +++ b/content/photography/Aesthetic/Landscape/Landscape_MOC.md @@ -0,0 +1,9 @@ +--- +title: Landscape Photography MOC +tags: +- photography +- landscape +- MOC +--- + +* [🌊Sea MOC](photography/Aesthetic/Landscape/Sea/Sea_MOC.md) \ No newline at end of file diff --git a/content/photography/Aesthetic/Landscape/Sea/Fujifilm_Blue_by_小红书_Philips谢骏.md b/content/photography/Aesthetic/Landscape/Sea/Fujifilm_Blue_by_小红书_Philips谢骏.md new file mode 100644 index 000000000..314b32e6f --- /dev/null +++ b/content/photography/Aesthetic/Landscape/Sea/Fujifilm_Blue_by_小红书_Philips谢骏.md @@ -0,0 +1,29 @@ +--- +title: Sea in Fujiflm Blue +tags: +- photography +- landscape +- photography +--- + +![](photography/Aesthetic/Landscape/Sea/attachments/Pasted%20image%2020230420014349.png) + + +![](photography/Aesthetic/Landscape/Sea/attachments/Pasted%20image%2020230420014354.png) + + +![](photography/Aesthetic/Landscape/Sea/attachments/Pasted%20image%2020230420014401.png) + + +![](photography/Aesthetic/Landscape/Sea/attachments/Pasted%20image%2020230420014613.png) + + +![](photography/Aesthetic/Landscape/Sea/attachments/Pasted%20image%2020230420014622.png) + + +![](photography/Aesthetic/Landscape/Sea/attachments/Pasted%20image%2020230420014634.png) + +# Reference + +* [太绝了!我拍出了富士蓝!- 小红书,Philips谢骏](https://www.xiaohongshu.com/user/profile/6272c025000000002102353b/641299a200000000130129bb) + diff --git a/content/photography/Aesthetic/Landscape/Sea/Sea_MOC.md b/content/photography/Aesthetic/Landscape/Sea/Sea_MOC.md new file mode 100644 index 000000000..16ce5366a --- /dev/null +++ b/content/photography/Aesthetic/Landscape/Sea/Sea_MOC.md @@ -0,0 +1,11 @@ +--- +title: 🌊Sea MOC +tags: + - landscape + - sea + - photography + - aesthetic +--- + +* [Fujifilm Blue🌊, 小红书-Philips谢骏](photography/Aesthetic/Landscape/Sea/Fujifilm_Blue_by_小红书_Philips谢骏.md) +* [豊島🏝, Instagram-shiifoncake](photography/Aesthetic/Landscape/Sea/豊島_Instagram_shiifoncake.md) \ No newline at end of file diff --git a/content/photography/Aesthetic/Landscape/Sea/attachments/Pasted image 20230420014349.png b/content/photography/Aesthetic/Landscape/Sea/attachments/Pasted image 20230420014349.png new file mode 100644 index 000000000..08673ce6e Binary files /dev/null and b/content/photography/Aesthetic/Landscape/Sea/attachments/Pasted image 20230420014349.png differ diff --git a/content/photography/Aesthetic/Landscape/Sea/attachments/Pasted image 20230420014354.png b/content/photography/Aesthetic/Landscape/Sea/attachments/Pasted image 20230420014354.png new file mode 100644 index 000000000..d28fa60a7 Binary files /dev/null and b/content/photography/Aesthetic/Landscape/Sea/attachments/Pasted image 20230420014354.png differ diff --git a/content/photography/Aesthetic/Landscape/Sea/attachments/Pasted image 20230420014357.png b/content/photography/Aesthetic/Landscape/Sea/attachments/Pasted image 20230420014357.png new file mode 100644 index 000000000..a3f51b1b4 Binary files /dev/null and b/content/photography/Aesthetic/Landscape/Sea/attachments/Pasted image 20230420014357.png differ diff --git a/content/photography/Aesthetic/Landscape/Sea/attachments/Pasted image 20230420014401.png 
b/content/photography/Aesthetic/Landscape/Sea/attachments/Pasted image 20230420014401.png new file mode 100644 index 000000000..a3f51b1b4 Binary files /dev/null and b/content/photography/Aesthetic/Landscape/Sea/attachments/Pasted image 20230420014401.png differ diff --git a/content/photography/Aesthetic/Landscape/Sea/attachments/Pasted image 20230420014613.png b/content/photography/Aesthetic/Landscape/Sea/attachments/Pasted image 20230420014613.png new file mode 100644 index 000000000..cf6e3a7be Binary files /dev/null and b/content/photography/Aesthetic/Landscape/Sea/attachments/Pasted image 20230420014613.png differ diff --git a/content/photography/Aesthetic/Landscape/Sea/attachments/Pasted image 20230420014622.png b/content/photography/Aesthetic/Landscape/Sea/attachments/Pasted image 20230420014622.png new file mode 100644 index 000000000..ff911d8cb Binary files /dev/null and b/content/photography/Aesthetic/Landscape/Sea/attachments/Pasted image 20230420014622.png differ diff --git a/content/photography/Aesthetic/Landscape/Sea/attachments/Pasted image 20230420014634.png b/content/photography/Aesthetic/Landscape/Sea/attachments/Pasted image 20230420014634.png new file mode 100644 index 000000000..80fd8fafb Binary files /dev/null and b/content/photography/Aesthetic/Landscape/Sea/attachments/Pasted image 20230420014634.png differ diff --git a/content/photography/Aesthetic/Landscape/Sea/attachments/shiifoncake_338758486_601356648715316_3737336679741136784_n.jpg b/content/photography/Aesthetic/Landscape/Sea/attachments/shiifoncake_338758486_601356648715316_3737336679741136784_n.jpg new file mode 100644 index 000000000..c1458466b Binary files /dev/null and b/content/photography/Aesthetic/Landscape/Sea/attachments/shiifoncake_338758486_601356648715316_3737336679741136784_n.jpg differ diff --git a/content/photography/Aesthetic/Landscape/Sea/attachments/shiifoncake_338803198_1141886276488589_5464974698780309052_n (1).jpg b/content/photography/Aesthetic/Landscape/Sea/attachments/shiifoncake_338803198_1141886276488589_5464974698780309052_n (1).jpg new file mode 100644 index 000000000..e08a294b8 Binary files /dev/null and b/content/photography/Aesthetic/Landscape/Sea/attachments/shiifoncake_338803198_1141886276488589_5464974698780309052_n (1).jpg differ diff --git a/content/photography/Aesthetic/Landscape/Sea/attachments/shiifoncake_338803198_1141886276488589_5464974698780309052_n.jpg b/content/photography/Aesthetic/Landscape/Sea/attachments/shiifoncake_338803198_1141886276488589_5464974698780309052_n.jpg new file mode 100644 index 000000000..e08a294b8 Binary files /dev/null and b/content/photography/Aesthetic/Landscape/Sea/attachments/shiifoncake_338803198_1141886276488589_5464974698780309052_n.jpg differ diff --git a/content/photography/Aesthetic/Landscape/Sea/attachments/shiifoncake_338949220_771246770941652_287141902256013940_n.jpg b/content/photography/Aesthetic/Landscape/Sea/attachments/shiifoncake_338949220_771246770941652_287141902256013940_n.jpg new file mode 100644 index 000000000..6627b41f9 Binary files /dev/null and b/content/photography/Aesthetic/Landscape/Sea/attachments/shiifoncake_338949220_771246770941652_287141902256013940_n.jpg differ diff --git a/content/photography/Aesthetic/Landscape/Sea/attachments/shiifoncake_339164445_155642070453847_6842139942547564019_n (1).jpg b/content/photography/Aesthetic/Landscape/Sea/attachments/shiifoncake_339164445_155642070453847_6842139942547564019_n (1).jpg new file mode 100644 index 000000000..a8a8e5534 Binary files /dev/null and 
b/content/photography/Aesthetic/Landscape/Sea/attachments/shiifoncake_339164445_155642070453847_6842139942547564019_n (1).jpg differ
diff --git a/content/photography/Aesthetic/Landscape/Sea/attachments/shiifoncake_339164445_155642070453847_6842139942547564019_n.jpg b/content/photography/Aesthetic/Landscape/Sea/attachments/shiifoncake_339164445_155642070453847_6842139942547564019_n.jpg
new file mode 100644
index 000000000..a8a8e5534
Binary files /dev/null and b/content/photography/Aesthetic/Landscape/Sea/attachments/shiifoncake_339164445_155642070453847_6842139942547564019_n.jpg differ
diff --git a/content/photography/Aesthetic/Landscape/Sea/豊島_Instagram_shiifoncake.md b/content/photography/Aesthetic/Landscape/Sea/豊島_Instagram_shiifoncake.md
new file mode 100644
index 000000000..adefb235a
--- /dev/null
+++ b/content/photography/Aesthetic/Landscape/Sea/豊島_Instagram_shiifoncake.md
@@ -0,0 +1,24 @@
+---
+title: 豊島🏝
+tags:
+  - photography
+  - sea
+  - landscape
+  - aesthetic
+---
+![](photography/Aesthetic/Landscape/Sea/attachments/shiifoncake_338949220_771246770941652_287141902256013940_n.jpg)
+
+![](photography/Aesthetic/Landscape/Sea/attachments/shiifoncake_339164445_155642070453847_6842139942547564019_n%20(1).jpg)
+
+![](photography/Aesthetic/Landscape/Sea/attachments/shiifoncake_339164445_155642070453847_6842139942547564019_n.jpg)
+
+![](photography/Aesthetic/Landscape/Sea/attachments/shiifoncake_338803198_1141886276488589_5464974698780309052_n%20(1).jpg)
+
+![](photography/Aesthetic/Landscape/Sea/attachments/shiifoncake_338803198_1141886276488589_5464974698780309052_n.jpg)
+
+![](photography/Aesthetic/Landscape/Sea/attachments/shiifoncake_338758486_601356648715316_3737336679741136784_n.jpg)
+
+
+# Reference
+
+* [https://www.instagram.com/p/Cqh4Ci8vV5u/](https://www.instagram.com/p/Cqh4Ci8vV5u/)
\ No newline at end of file
diff --git a/content/photography/Aesthetic/Polaroid/Polaroid_aesthetic_MOC.md b/content/photography/Aesthetic/Polaroid/Polaroid_aesthetic_MOC.md
new file mode 100644
index 000000000..7566691e5
--- /dev/null
+++ b/content/photography/Aesthetic/Polaroid/Polaroid_aesthetic_MOC.md
@@ -0,0 +1,9 @@
+---
+title: Polaroid Aesthetic MOC
+tags:
+- photography
+- Polaroid
+- MOC
+---
+
+* [🖼How to show Polaroid photo in a great way](photography/Aesthetic/Polaroid/Polaroid_showcase.md)
\ No newline at end of file
diff --git a/content/photography/Aesthetic/Polaroid/Polaroid_showcase.md b/content/photography/Aesthetic/Polaroid/Polaroid_showcase.md
new file mode 100644
index 000000000..f314724da
--- /dev/null
+++ b/content/photography/Aesthetic/Polaroid/Polaroid_showcase.md
@@ -0,0 +1,25 @@
+---
+title: How to show Polaroid photo in a great way
+tags:
+- photography
+- Polaroid
+- share
+---
+
+![](photography/Aesthetic/Polaroid/attachments/IMG_5330.jpg)
+
+![](photography/Aesthetic/Polaroid/attachments/IMG_5329.jpg)
+
+![](photography/Aesthetic/Polaroid/attachments/IMG_5327.jpg)
+
+![](photography/Aesthetic/Polaroid/attachments/IMG_5334.jpg)
+
+Credits to [比扫描仪更easy的宝丽来翻拍解决方案 - BonBon的Pan](https://www.xiaohongshu.com/user/profile/6272c025000000002102353b/6331af53000000001701acfd)
\ No newline at end of file
diff --git a/content/photography/Aesthetic/Polaroid/attachments/IMG_5327.jpg b/content/photography/Aesthetic/Polaroid/attachments/IMG_5327.jpg
new file mode 100644
index 000000000..a3e3accdd
Binary files /dev/null and b/content/photography/Aesthetic/Polaroid/attachments/IMG_5327.jpg differ
diff --git a/content/photography/Aesthetic/Polaroid/attachments/IMG_5329.jpg b/content/photography/Aesthetic/Polaroid/attachments/IMG_5329.jpg
new file mode 100644
index 000000000..66c6239ce
Binary files /dev/null and b/content/photography/Aesthetic/Polaroid/attachments/IMG_5329.jpg differ
diff --git a/content/photography/Aesthetic/Polaroid/attachments/IMG_5330.jpg b/content/photography/Aesthetic/Polaroid/attachments/IMG_5330.jpg
new file mode 100644
index 000000000..c78f977e0
Binary files /dev/null and b/content/photography/Aesthetic/Polaroid/attachments/IMG_5330.jpg differ
diff --git a/content/photography/Aesthetic/Polaroid/attachments/IMG_5334.jpg b/content/photography/Aesthetic/Polaroid/attachments/IMG_5334.jpg
new file mode 100644
index 000000000..95d870f1e
Binary files /dev/null and b/content/photography/Aesthetic/Polaroid/attachments/IMG_5334.jpg differ
diff --git a/content/photography/Aesthetic/Portrait/Flower_and_Girl.md b/content/photography/Aesthetic/Portrait/Flower_and_Girl.md
new file mode 100644
index 000000000..b19b53e00
--- /dev/null
+++ b/content/photography/Aesthetic/Portrait/Flower_and_Girl.md
@@ -0,0 +1,53 @@
+---
+title: 🌸Flower & Girl
+tags:
+- photography
+- portrait
+- 摘抄
+---
+
+Credits to [Marta Bevacqua](https://www.martabevacquaphotography.com/),
+Thanks🌸
+
+![](photography/Aesthetic/Portrait/attachments/14.jpg)
+
+![](photography/Aesthetic/Portrait/attachments/15.jpg)
+
+![](photography/Aesthetic/Portrait/attachments/16.jpg)
+
+![](photography/Aesthetic/Portrait/attachments/17.jpg)
+
+![](photography/Aesthetic/Portrait/attachments/18.jpg)
+
+![](photography/Aesthetic/Portrait/attachments/19.jpg)
+
+![](photography/Aesthetic/Portrait/attachments/20.jpg)
+
+![](photography/Aesthetic/Portrait/attachments/21.jpg)
+
+![](photography/Aesthetic/Portrait/attachments/22.jpg)
+
+![](photography/Aesthetic/Portrait/attachments/content%20(1).jpg)
+
+![](photography/Aesthetic/Portrait/attachments/content%20(2).jpg)
+
+![](photography/Aesthetic/Portrait/attachments/content%20(3).jpg)
+
+![](photography/Aesthetic/Portrait/attachments/content%20(4).jpg)
+
+![](photography/Aesthetic/Portrait/attachments/content%20(5).jpg)
+
+![](photography/Aesthetic/Portrait/attachments/content%20(6).jpg)
+
+![](photography/Aesthetic/Portrait/attachments/content%20(7).jpg)
+
+![](photography/Aesthetic/Portrait/attachments/content%20(8).jpg)
+
+![](photography/Aesthetic/Portrait/attachments/content%20(9).jpg)
+
+![](photography/Aesthetic/Portrait/attachments/content%20(11).jpg)
+
+![](photography/Aesthetic/Portrait/attachments/content%20(12).jpg)
+
+![](photography/Aesthetic/Portrait/attachments/content.jpg)
+
diff --git a/content/photography/Aesthetic/Portrait/From Korean MV Todays_Mod.md b/content/photography/Aesthetic/Portrait/From Korean MV Todays_Mod.md
new file mode 100644
index 000000000..2ac7f3e6a
--- /dev/null
+++ b/content/photography/Aesthetic/Portrait/From Korean MV Todays_Mod.md
@@ -0,0 +1,35 @@
+---
+title: Cute Portrait from Korean MV
+tags:
+- photography
+- portrait
+- korean
+- cute
+- 摘抄
+---
+
+Credits to [MV - CHEEZE(치즈) _ Today's Mood(오늘의 기분)](https://www.youtube.com/watch?v=zRq_DlEzygk),
+Thanks
+
+I also came across this in [摄影灵感|那有一点可爱 - by 小八怪](https://www.xiaohongshu.com/explore/63f0a27e0000000013002b05)
+
+![](photography/Aesthetic/Portrait/attachments/photo_4_2023-03-27_23-53-20.jpg)
+
+![](photography/Aesthetic/Portrait/attachments/photo_5_2023-03-27_23-53-20.jpg)
+
+![](photography/Aesthetic/Portrait/attachments/photo_6_2023-03-27_23-53-20.jpg)
+
+![](photography/Aesthetic/Portrait/attachments/photo_7_2023-03-27_23-53-20.jpg)
+
+![](photography/Aesthetic/Portrait/attachments/photo_8_2023-03-27_23-53-20.jpg) + +![](photography/Aesthetic/Portrait/attachments/photo_9_2023-03-27_23-53-20.jpg) + +![](photography/Aesthetic/Portrait/attachments/photo_1_2023-03-27_23-53-20%201.jpg) + +![](photography/Aesthetic/Portrait/attachments/photo_2_2023-03-27_23-53-20%201.jpg) + +![](photography/Aesthetic/Portrait/attachments/photo_3_2023-03-27_23-53-20%201.jpg) + +![](photography/Aesthetic/Portrait/attachments/photo_2023-03-27_23-55-45.jpg) \ No newline at end of file diff --git a/content/photography/Aesthetic/Portrait/Portrait_MOC.md b/content/photography/Aesthetic/Portrait/Portrait_MOC.md new file mode 100644 index 000000000..b9b1a49c9 --- /dev/null +++ b/content/photography/Aesthetic/Portrait/Portrait_MOC.md @@ -0,0 +1,11 @@ +--- +title: 👧Portrait +tags: +- photography +- portrait +- 摘抄 +- MOC +--- + +* [🌸Flower & Girl](photography/Aesthetic/Portrait/Flower_and_Girl.md) +* [👧🇰🇷Cute Portrait from Korean MV ](photography/Aesthetic/Portrait/From%20Korean%20MV%20Todays_Mod.md) diff --git a/content/photography/Aesthetic/Portrait/attachments/14.jpg b/content/photography/Aesthetic/Portrait/attachments/14.jpg new file mode 100644 index 000000000..633ee4f46 Binary files /dev/null and b/content/photography/Aesthetic/Portrait/attachments/14.jpg differ diff --git a/content/photography/Aesthetic/Portrait/attachments/15.jpg b/content/photography/Aesthetic/Portrait/attachments/15.jpg new file mode 100644 index 000000000..ccb7a9eff Binary files /dev/null and b/content/photography/Aesthetic/Portrait/attachments/15.jpg differ diff --git a/content/photography/Aesthetic/Portrait/attachments/16.jpg b/content/photography/Aesthetic/Portrait/attachments/16.jpg new file mode 100644 index 000000000..20400eea5 Binary files /dev/null and b/content/photography/Aesthetic/Portrait/attachments/16.jpg differ diff --git a/content/photography/Aesthetic/Portrait/attachments/17.jpg b/content/photography/Aesthetic/Portrait/attachments/17.jpg new file mode 100644 index 000000000..c9abd4fbe Binary files /dev/null and b/content/photography/Aesthetic/Portrait/attachments/17.jpg differ diff --git a/content/photography/Aesthetic/Portrait/attachments/18.jpg b/content/photography/Aesthetic/Portrait/attachments/18.jpg new file mode 100644 index 000000000..d3f1763e0 Binary files /dev/null and b/content/photography/Aesthetic/Portrait/attachments/18.jpg differ diff --git a/content/photography/Aesthetic/Portrait/attachments/19.jpg b/content/photography/Aesthetic/Portrait/attachments/19.jpg new file mode 100644 index 000000000..d59effec5 Binary files /dev/null and b/content/photography/Aesthetic/Portrait/attachments/19.jpg differ diff --git a/content/photography/Aesthetic/Portrait/attachments/20.jpg b/content/photography/Aesthetic/Portrait/attachments/20.jpg new file mode 100644 index 000000000..6d21e8ef8 Binary files /dev/null and b/content/photography/Aesthetic/Portrait/attachments/20.jpg differ diff --git a/content/photography/Aesthetic/Portrait/attachments/21.jpg b/content/photography/Aesthetic/Portrait/attachments/21.jpg new file mode 100644 index 000000000..9f1f6863d Binary files /dev/null and b/content/photography/Aesthetic/Portrait/attachments/21.jpg differ diff --git a/content/photography/Aesthetic/Portrait/attachments/22.jpg b/content/photography/Aesthetic/Portrait/attachments/22.jpg new file mode 100644 index 000000000..6383a5947 Binary files /dev/null and b/content/photography/Aesthetic/Portrait/attachments/22.jpg differ diff --git 
a/content/photography/Aesthetic/Portrait/attachments/content (1).jpg b/content/photography/Aesthetic/Portrait/attachments/content (1).jpg new file mode 100644 index 000000000..f4f94f3c6 Binary files /dev/null and b/content/photography/Aesthetic/Portrait/attachments/content (1).jpg differ diff --git a/content/photography/Aesthetic/Portrait/attachments/content (11).jpg b/content/photography/Aesthetic/Portrait/attachments/content (11).jpg new file mode 100644 index 000000000..a7f0ea2b0 Binary files /dev/null and b/content/photography/Aesthetic/Portrait/attachments/content (11).jpg differ diff --git a/content/photography/Aesthetic/Portrait/attachments/content (12).jpg b/content/photography/Aesthetic/Portrait/attachments/content (12).jpg new file mode 100644 index 000000000..c9248bb7a Binary files /dev/null and b/content/photography/Aesthetic/Portrait/attachments/content (12).jpg differ diff --git a/content/photography/Aesthetic/Portrait/attachments/content (2).jpg b/content/photography/Aesthetic/Portrait/attachments/content (2).jpg new file mode 100644 index 000000000..bdf0b0467 Binary files /dev/null and b/content/photography/Aesthetic/Portrait/attachments/content (2).jpg differ diff --git a/content/photography/Aesthetic/Portrait/attachments/content (3).jpg b/content/photography/Aesthetic/Portrait/attachments/content (3).jpg new file mode 100644 index 000000000..261db4f03 Binary files /dev/null and b/content/photography/Aesthetic/Portrait/attachments/content (3).jpg differ diff --git a/content/photography/Aesthetic/Portrait/attachments/content (4).jpg b/content/photography/Aesthetic/Portrait/attachments/content (4).jpg new file mode 100644 index 000000000..6fb490a8f Binary files /dev/null and b/content/photography/Aesthetic/Portrait/attachments/content (4).jpg differ diff --git a/content/photography/Aesthetic/Portrait/attachments/content (5).jpg b/content/photography/Aesthetic/Portrait/attachments/content (5).jpg new file mode 100644 index 000000000..2b80376a7 Binary files /dev/null and b/content/photography/Aesthetic/Portrait/attachments/content (5).jpg differ diff --git a/content/photography/Aesthetic/Portrait/attachments/content (6).jpg b/content/photography/Aesthetic/Portrait/attachments/content (6).jpg new file mode 100644 index 000000000..8e817b63f Binary files /dev/null and b/content/photography/Aesthetic/Portrait/attachments/content (6).jpg differ diff --git a/content/photography/Aesthetic/Portrait/attachments/content (7).jpg b/content/photography/Aesthetic/Portrait/attachments/content (7).jpg new file mode 100644 index 000000000..47058680e Binary files /dev/null and b/content/photography/Aesthetic/Portrait/attachments/content (7).jpg differ diff --git a/content/photography/Aesthetic/Portrait/attachments/content (8).jpg b/content/photography/Aesthetic/Portrait/attachments/content (8).jpg new file mode 100644 index 000000000..146d5f72d Binary files /dev/null and b/content/photography/Aesthetic/Portrait/attachments/content (8).jpg differ diff --git a/content/photography/Aesthetic/Portrait/attachments/content (9).jpg b/content/photography/Aesthetic/Portrait/attachments/content (9).jpg new file mode 100644 index 000000000..cd9075477 Binary files /dev/null and b/content/photography/Aesthetic/Portrait/attachments/content (9).jpg differ diff --git a/content/photography/Aesthetic/Portrait/attachments/content.jpg b/content/photography/Aesthetic/Portrait/attachments/content.jpg new file mode 100644 index 000000000..e42ba1e44 Binary files /dev/null and 
b/content/photography/Aesthetic/Portrait/attachments/content.jpg differ diff --git a/content/photography/Aesthetic/Portrait/attachments/photo_1_2023-03-27_23-53-20 1.jpg b/content/photography/Aesthetic/Portrait/attachments/photo_1_2023-03-27_23-53-20 1.jpg new file mode 100644 index 000000000..7146acc75 Binary files /dev/null and b/content/photography/Aesthetic/Portrait/attachments/photo_1_2023-03-27_23-53-20 1.jpg differ diff --git a/content/photography/Aesthetic/Portrait/attachments/photo_1_2023-03-27_23-53-20.jpg b/content/photography/Aesthetic/Portrait/attachments/photo_1_2023-03-27_23-53-20.jpg new file mode 100644 index 000000000..7146acc75 Binary files /dev/null and b/content/photography/Aesthetic/Portrait/attachments/photo_1_2023-03-27_23-53-20.jpg differ diff --git a/content/photography/Aesthetic/Portrait/attachments/photo_2023-03-27_23-55-45.jpg b/content/photography/Aesthetic/Portrait/attachments/photo_2023-03-27_23-55-45.jpg new file mode 100644 index 000000000..205f5cea5 Binary files /dev/null and b/content/photography/Aesthetic/Portrait/attachments/photo_2023-03-27_23-55-45.jpg differ diff --git a/content/photography/Aesthetic/Portrait/attachments/photo_2_2023-03-27_23-53-20 1.jpg b/content/photography/Aesthetic/Portrait/attachments/photo_2_2023-03-27_23-53-20 1.jpg new file mode 100644 index 000000000..268ecbc16 Binary files /dev/null and b/content/photography/Aesthetic/Portrait/attachments/photo_2_2023-03-27_23-53-20 1.jpg differ diff --git a/content/photography/Aesthetic/Portrait/attachments/photo_2_2023-03-27_23-53-20.jpg b/content/photography/Aesthetic/Portrait/attachments/photo_2_2023-03-27_23-53-20.jpg new file mode 100644 index 000000000..268ecbc16 Binary files /dev/null and b/content/photography/Aesthetic/Portrait/attachments/photo_2_2023-03-27_23-53-20.jpg differ diff --git a/content/photography/Aesthetic/Portrait/attachments/photo_3_2023-03-27_23-53-20 1.jpg b/content/photography/Aesthetic/Portrait/attachments/photo_3_2023-03-27_23-53-20 1.jpg new file mode 100644 index 000000000..ff6594b96 Binary files /dev/null and b/content/photography/Aesthetic/Portrait/attachments/photo_3_2023-03-27_23-53-20 1.jpg differ diff --git a/content/photography/Aesthetic/Portrait/attachments/photo_3_2023-03-27_23-53-20.jpg b/content/photography/Aesthetic/Portrait/attachments/photo_3_2023-03-27_23-53-20.jpg new file mode 100644 index 000000000..ff6594b96 Binary files /dev/null and b/content/photography/Aesthetic/Portrait/attachments/photo_3_2023-03-27_23-53-20.jpg differ diff --git a/content/photography/Aesthetic/Portrait/attachments/photo_4_2023-03-27_23-53-20.jpg b/content/photography/Aesthetic/Portrait/attachments/photo_4_2023-03-27_23-53-20.jpg new file mode 100644 index 000000000..dcb789df1 Binary files /dev/null and b/content/photography/Aesthetic/Portrait/attachments/photo_4_2023-03-27_23-53-20.jpg differ diff --git a/content/photography/Aesthetic/Portrait/attachments/photo_5_2023-03-27_23-53-20.jpg b/content/photography/Aesthetic/Portrait/attachments/photo_5_2023-03-27_23-53-20.jpg new file mode 100644 index 000000000..b42c23e50 Binary files /dev/null and b/content/photography/Aesthetic/Portrait/attachments/photo_5_2023-03-27_23-53-20.jpg differ diff --git a/content/photography/Aesthetic/Portrait/attachments/photo_6127648898429335188_y.jpg b/content/photography/Aesthetic/Portrait/attachments/photo_6127648898429335188_y.jpg new file mode 100644 index 000000000..a587b5783 Binary files /dev/null and 
b/content/photography/Aesthetic/Portrait/attachments/photo_6127648898429335188_y.jpg differ diff --git a/content/photography/Aesthetic/Portrait/attachments/photo_6127648898429335189_y.jpg b/content/photography/Aesthetic/Portrait/attachments/photo_6127648898429335189_y.jpg new file mode 100644 index 000000000..4a5612a2b Binary files /dev/null and b/content/photography/Aesthetic/Portrait/attachments/photo_6127648898429335189_y.jpg differ diff --git a/content/photography/Aesthetic/Portrait/attachments/photo_6127648898429335190_y.jpg b/content/photography/Aesthetic/Portrait/attachments/photo_6127648898429335190_y.jpg new file mode 100644 index 000000000..88357e545 Binary files /dev/null and b/content/photography/Aesthetic/Portrait/attachments/photo_6127648898429335190_y.jpg differ diff --git a/content/photography/Aesthetic/Portrait/attachments/photo_6127648898429335191_y.jpg b/content/photography/Aesthetic/Portrait/attachments/photo_6127648898429335191_y.jpg new file mode 100644 index 000000000..1550f92a9 Binary files /dev/null and b/content/photography/Aesthetic/Portrait/attachments/photo_6127648898429335191_y.jpg differ diff --git a/content/photography/Aesthetic/Portrait/attachments/photo_6127648898429335192_y.jpg b/content/photography/Aesthetic/Portrait/attachments/photo_6127648898429335192_y.jpg new file mode 100644 index 000000000..205f5cea5 Binary files /dev/null and b/content/photography/Aesthetic/Portrait/attachments/photo_6127648898429335192_y.jpg differ diff --git a/content/photography/Aesthetic/Portrait/attachments/photo_6127648898429335193_y.jpg b/content/photography/Aesthetic/Portrait/attachments/photo_6127648898429335193_y.jpg new file mode 100644 index 000000000..0e796d844 Binary files /dev/null and b/content/photography/Aesthetic/Portrait/attachments/photo_6127648898429335193_y.jpg differ diff --git a/content/photography/Aesthetic/Portrait/attachments/photo_6127648898429335194_y.jpg b/content/photography/Aesthetic/Portrait/attachments/photo_6127648898429335194_y.jpg new file mode 100644 index 000000000..13590c738 Binary files /dev/null and b/content/photography/Aesthetic/Portrait/attachments/photo_6127648898429335194_y.jpg differ diff --git a/content/photography/Aesthetic/Portrait/attachments/photo_6127648898429335195_y.jpg b/content/photography/Aesthetic/Portrait/attachments/photo_6127648898429335195_y.jpg new file mode 100644 index 000000000..c1c96d660 Binary files /dev/null and b/content/photography/Aesthetic/Portrait/attachments/photo_6127648898429335195_y.jpg differ diff --git a/content/photography/Aesthetic/Portrait/attachments/photo_6_2023-03-27_23-53-20.jpg b/content/photography/Aesthetic/Portrait/attachments/photo_6_2023-03-27_23-53-20.jpg new file mode 100644 index 000000000..8013a64b7 Binary files /dev/null and b/content/photography/Aesthetic/Portrait/attachments/photo_6_2023-03-27_23-53-20.jpg differ diff --git a/content/photography/Aesthetic/Portrait/attachments/photo_7_2023-03-27_23-53-20.jpg b/content/photography/Aesthetic/Portrait/attachments/photo_7_2023-03-27_23-53-20.jpg new file mode 100644 index 000000000..2dd2a661c Binary files /dev/null and b/content/photography/Aesthetic/Portrait/attachments/photo_7_2023-03-27_23-53-20.jpg differ diff --git a/content/photography/Aesthetic/Portrait/attachments/photo_8_2023-03-27_23-53-20.jpg b/content/photography/Aesthetic/Portrait/attachments/photo_8_2023-03-27_23-53-20.jpg new file mode 100644 index 000000000..f49933efa Binary files /dev/null and 
b/content/photography/Aesthetic/Portrait/attachments/photo_8_2023-03-27_23-53-20.jpg differ diff --git a/content/photography/Aesthetic/Portrait/attachments/photo_9_2023-03-27_23-53-20.jpg b/content/photography/Aesthetic/Portrait/attachments/photo_9_2023-03-27_23-53-20.jpg new file mode 100644 index 000000000..53cffcce5 Binary files /dev/null and b/content/photography/Aesthetic/Portrait/attachments/photo_9_2023-03-27_23-53-20.jpg differ diff --git a/content/photography/Aesthetic/Style/Grainy_Green.md b/content/photography/Aesthetic/Style/Grainy_Green.md new file mode 100644 index 000000000..78cc39d6a --- /dev/null +++ b/content/photography/Aesthetic/Style/Grainy_Green.md @@ -0,0 +1,18 @@ +--- +title: Grainy Green +tags: +- photography +- grainy +- style +- share +--- + +![](photography/Aesthetic/Style/attachments/cinematicshine_326914596_601425291912114_4038822895364546166_n.jpg) + + +![](photography/Aesthetic/Style/attachments/cinematicshine_341207739_637183131584785_7839745357939483631_n.jpg) + + +# Reference + +* [https://www.instagram.com/p/CrGoBoeo8NF/](https://www.instagram.com/p/CrGoBoeo8NF/) \ No newline at end of file diff --git a/content/photography/Aesthetic/Style/Style_MOC.md b/content/photography/Aesthetic/Style/Style_MOC.md new file mode 100644 index 000000000..7135416fe --- /dev/null +++ b/content/photography/Aesthetic/Style/Style_MOC.md @@ -0,0 +1,11 @@ +--- +title: ☝Style +tags: +- photography +- style +- share +- MOC +--- + +* [🌅Warmth - Nguan](photography/Aesthetic/Style/Warmth_by_Nguan.md) +* [📗 Grainy Green](photography/Aesthetic/Style/Grainy_Green.md) diff --git a/content/photography/Aesthetic/Style/Warmth_by_Nguan.md b/content/photography/Aesthetic/Style/Warmth_by_Nguan.md new file mode 100644 index 000000000..c2350f073 --- /dev/null +++ b/content/photography/Aesthetic/Style/Warmth_by_Nguan.md @@ -0,0 +1,26 @@ +--- +title: 🎈Warmth - Nguan +tags: +- share +- photography +--- + +Credits to [Nguan](https://www.instagram.com/_nguan_/) + + +![](photography/Aesthetic/Style/attachments/167396766_118928406833773_7462235788758622009_n.jpg) + +![](photography/Aesthetic/Style/attachments/275801921_507726407459443_2779968335661218284_n.jpg) + +![](photography/Aesthetic/Style/attachments/275101252_116346090976633_4116581661408205933_n.jpg) + + +![](photography/Aesthetic/Style/attachments/152391470_356387755409221_8144178651765781801_n.jpg) + + +![](photography/Aesthetic/Style/attachments/153386473_426909131936316_8535520818773302544_n.jpg) + + +![](photography/Aesthetic/Style/attachments/156216827_337435770999537_8250898900544979316_n.jpg) + + diff --git a/content/photography/Aesthetic/Style/attachments/152391470_356387755409221_8144178651765781801_n.jpg b/content/photography/Aesthetic/Style/attachments/152391470_356387755409221_8144178651765781801_n.jpg new file mode 100644 index 000000000..7ba3132a6 Binary files /dev/null and b/content/photography/Aesthetic/Style/attachments/152391470_356387755409221_8144178651765781801_n.jpg differ diff --git a/content/photography/Aesthetic/Style/attachments/153386473_426909131936316_8535520818773302544_n.jpg b/content/photography/Aesthetic/Style/attachments/153386473_426909131936316_8535520818773302544_n.jpg new file mode 100644 index 000000000..2b89aaf26 Binary files /dev/null and b/content/photography/Aesthetic/Style/attachments/153386473_426909131936316_8535520818773302544_n.jpg differ diff --git a/content/photography/Aesthetic/Style/attachments/156216827_337435770999537_8250898900544979316_n.jpg 
b/content/photography/Aesthetic/Style/attachments/156216827_337435770999537_8250898900544979316_n.jpg
new file mode 100644
index 000000000..8c358ae9e
Binary files /dev/null and b/content/photography/Aesthetic/Style/attachments/156216827_337435770999537_8250898900544979316_n.jpg differ
diff --git a/content/photography/Aesthetic/Style/attachments/167396766_118928406833773_7462235788758622009_n.jpg b/content/photography/Aesthetic/Style/attachments/167396766_118928406833773_7462235788758622009_n.jpg
new file mode 100644
index 000000000..c925c1414
Binary files /dev/null and b/content/photography/Aesthetic/Style/attachments/167396766_118928406833773_7462235788758622009_n.jpg differ
diff --git a/content/photography/Aesthetic/Style/attachments/275101252_116346090976633_4116581661408205933_n.jpg b/content/photography/Aesthetic/Style/attachments/275101252_116346090976633_4116581661408205933_n.jpg
new file mode 100644
index 000000000..817f2069f
Binary files /dev/null and b/content/photography/Aesthetic/Style/attachments/275101252_116346090976633_4116581661408205933_n.jpg differ
diff --git a/content/photography/Aesthetic/Style/attachments/275801921_507726407459443_2779968335661218284_n.jpg b/content/photography/Aesthetic/Style/attachments/275801921_507726407459443_2779968335661218284_n.jpg
new file mode 100644
index 000000000..c368f608b
Binary files /dev/null and b/content/photography/Aesthetic/Style/attachments/275801921_507726407459443_2779968335661218284_n.jpg differ
diff --git a/content/photography/Aesthetic/Style/attachments/cinematicshine_326914596_601425291912114_4038822895364546166_n.jpg b/content/photography/Aesthetic/Style/attachments/cinematicshine_326914596_601425291912114_4038822895364546166_n.jpg
new file mode 100644
index 000000000..8c924caa8
Binary files /dev/null and b/content/photography/Aesthetic/Style/attachments/cinematicshine_326914596_601425291912114_4038822895364546166_n.jpg differ
diff --git a/content/photography/Aesthetic/Style/attachments/cinematicshine_341207739_637183131584785_7839745357939483631_n.jpg b/content/photography/Aesthetic/Style/attachments/cinematicshine_341207739_637183131584785_7839745357939483631_n.jpg
new file mode 100644
index 000000000..04e0f45fa
Binary files /dev/null and b/content/photography/Aesthetic/Style/attachments/cinematicshine_341207739_637183131584785_7839745357939483631_n.jpg differ
diff --git a/content/photography/Basic/MTF_Curve.md b/content/photography/Basic/MTF_Curve.md
new file mode 100644
index 000000000..fc8896a0a
--- /dev/null
+++ b/content/photography/Basic/MTF_Curve.md
@@ -0,0 +1,95 @@
+---
+title: Modulation Transfer Function (MTF) Curve
+tags:
+- photography
+- basic
+- lens
+---
+
+Many factors affect lens performance:
+
+* diffraction
+* optical aberrations
+* design criteria and philosophy
+* manufacturing tolerances and errors
+
+The MTF curve is commonly used as a standard way to measure lens performance.
+
+> [!abstract]
+> This note looks at MTF curves from a photographer's point of view, without going into the physical-optics analysis.
+
+# What is an MTF Curve
+
+The modulation transfer function (MTF) curve is an information-dense metric that describes how a lens *reproduces contrast as a function of spatial frequency (resolution)*. For a fixed set of basic test parameters, an MTF curve gives a composite view of how [**optical aberrations**](physics/Optical/optical_abberation.md) affect lens performance.
+
+From an MTF chart we can read off:
+
+1. resolution (*the lens's ability to render fine detail*)
+2. contrast (*the lens's ability to render light and dark*)
+3. dispersion and lateral chromatic aberration
+4. field curvature
+
+What it cannot tell us:
+
+1. lens distortion
+2. radial chromatic aberration
+3. vignetting
+4. flare
+
+# How to Measure an MTF Curve
+
+As you probably know, a lens images much better at the center than at the edges, so testing only the center or only the edges cannot represent how good a lens is. Manufacturers therefore test several points from the center outward. As the figure below shows, Nikon's full-frame bodies are tested at points 5 mm, 10 mm, 15 mm and 20 mm from the center. For APS-C, because the sensor is smaller, points such as 3 mm, 6 mm, 9 mm and 12 mm are chosen instead; different manufacturers may differ.
+
+![](photography/Basic/attachments/Pasted%20image%2020230424143258.png)
+
+The test target is typically black lines on a white background.
+
+![](photography/Basic/attachments/Pasted%20image%2020230424143425.png)
+
+* The **thick lines** test **contrast**, at 10 lines/mm
+* The **fine lines** test **resolution**, at 30 lines/mm
+* Each thickness comes in two sets: one parallel to the radius, called *sagittal*, and one perpendicular to it, called *meridional*; this is mainly for measuring **dispersion** and **chromatic aberration**.
+
+In the figure below, image quality gets progressively worse:
+
+![](photography/Basic/attachments/Pasted%20image%2020230424143543.png)
+
+# How to Read an MTF Curve
+
+![](photography/Basic/attachments/Pasted%20image%2020230424143711.png)
+
+The horizontal axis is the distance from the center of the frame; the vertical axis is the contrast/resolution value.
+
+A perfect lens would look like the chart below, with one red line and one blue line:
+
+the red line comes from the **thick-line** test and represents **contrast**;
+
+the blue line comes from the **fine-line** test and represents **resolution**.
+
+![](photography/Basic/attachments/Pasted%20image%2020230424143940.png)
+
+An ordinary lens looks more like the chart below (red for contrast, blue for resolution): contrast and resolution are best at the center and get worse towards the edges.
+
+As a rule of thumb, a value above 0.9 marks an outstanding lens, 0.7-0.9 is excellent, 0.5-0.7 is average, and below 0.5 counts as poor.
+
+Notice the wavy shape in the middle part of the curves: it reveals another lens property, field curvature. Waviness means field curvature is present, and the larger the waves, the more severe it is; in practice it is usually not a big problem.
+
+![](photography/Basic/attachments/Pasted%20image%2020230424144046.png)
+
+The most common kind of MTF chart looks like this:
+
+![](photography/Basic/attachments/Pasted%20image%2020230424144112.png)
+
+1. The red lines (10 lines/mm, the thick lines from the test above) measure contrast; the value drops gradually from center to edge, which means the lens's contrast falls off towards the edge.
+2. Resolution likewise decreases from center to edge.
+3. Dispersion and chromatic aberration
+   * During testing, both the thick and fine lines come in two sets, one parallel to the radius and one perpendicular, which yields two curves each: the parallel (sagittal) set is drawn solid and the perpendicular (meridional) set dashed. **The closer the solid and dashed lines track each other, the better the lens controls dispersion and chromatic aberration; the more they diverge, the more severe these are.**
+4. Field curvature
+
+# Reference
+
+* [The Modulation Transfer Function (MTF), https://www.edmundoptics.com](https://www.edmundoptics.com/knowledge-center/application-notes/imaging/modulation-transfer-function-mtf-and-mtf-curves/)
+* [MTF 曲线图应该怎么看?, 知乎](https://www.zhihu.com/question/19713211)
\ No newline at end of file
diff --git a/content/photography/Basic/Saturation.md b/content/photography/Basic/Saturation.md
new file mode 100644
index 000000000..3b3687485
--- /dev/null
+++ b/content/photography/Basic/Saturation.md
@@ -0,0 +1,8 @@
+---
+title: Saturation - 饱和度
+tags:
+- basic
+- photography
+---
+
+to be written...
\ No newline at end of file diff --git a/content/photography/Basic/attachments/DGE{QUQ2G`9TD8NE18J3J@T.png b/content/photography/Basic/attachments/DGE{QUQ2G`9TD8NE18J3J@T.png new file mode 100644 index 000000000..7511669b2 Binary files /dev/null and b/content/photography/Basic/attachments/DGE{QUQ2G`9TD8NE18J3J@T.png differ diff --git a/content/photography/Basic/attachments/Pasted image 20230424140836.png b/content/photography/Basic/attachments/Pasted image 20230424140836.png new file mode 100644 index 000000000..67c5dd297 Binary files /dev/null and b/content/photography/Basic/attachments/Pasted image 20230424140836.png differ diff --git a/content/photography/Basic/attachments/Pasted image 20230424143258.png b/content/photography/Basic/attachments/Pasted image 20230424143258.png new file mode 100644 index 000000000..1b4dd41f9 Binary files /dev/null and b/content/photography/Basic/attachments/Pasted image 20230424143258.png differ diff --git a/content/photography/Basic/attachments/Pasted image 20230424143425.png b/content/photography/Basic/attachments/Pasted image 20230424143425.png new file mode 100644 index 000000000..e584cbe85 Binary files /dev/null and b/content/photography/Basic/attachments/Pasted image 20230424143425.png differ diff --git a/content/photography/Basic/attachments/Pasted image 20230424143543.png b/content/photography/Basic/attachments/Pasted image 20230424143543.png new file mode 100644 index 000000000..6527c2f7a Binary files /dev/null and b/content/photography/Basic/attachments/Pasted image 20230424143543.png differ diff --git a/content/photography/Basic/attachments/Pasted image 20230424143711.png b/content/photography/Basic/attachments/Pasted image 20230424143711.png new file mode 100644 index 000000000..a2aba7320 Binary files /dev/null and b/content/photography/Basic/attachments/Pasted image 20230424143711.png differ diff --git a/content/photography/Basic/attachments/Pasted image 20230424143940.png b/content/photography/Basic/attachments/Pasted image 20230424143940.png new file mode 100644 index 000000000..ddd9c5673 Binary files /dev/null and b/content/photography/Basic/attachments/Pasted image 20230424143940.png differ diff --git a/content/photography/Basic/attachments/Pasted image 20230424144046.png b/content/photography/Basic/attachments/Pasted image 20230424144046.png new file mode 100644 index 000000000..2b4e441a5 Binary files /dev/null and b/content/photography/Basic/attachments/Pasted image 20230424144046.png differ diff --git a/content/photography/Basic/attachments/Pasted image 20230424144112.png b/content/photography/Basic/attachments/Pasted image 20230424144112.png new file mode 100644 index 000000000..d937677b5 Binary files /dev/null and b/content/photography/Basic/attachments/Pasted image 20230424144112.png differ diff --git a/content/photography/Cameras_Research/Lens_Structure/Lens_Structure_MOC.md b/content/photography/Cameras_Research/Lens_Structure/Lens_Structure_MOC.md new file mode 100644 index 000000000..a8e8e76e5 --- /dev/null +++ b/content/photography/Cameras_Research/Lens_Structure/Lens_Structure_MOC.md @@ -0,0 +1,10 @@ +--- +title: Lens Structure MOC +tags: +- photography +- lens +- MOC +- review +--- + +* \ No newline at end of file diff --git a/content/photography/Cameras_Research/Pocket_film/Pocket_film_camera_MOC.md b/content/photography/Cameras_Research/Pocket_film/Pocket_film_camera_MOC.md new file mode 100644 index 000000000..3fe6b0e03 --- /dev/null +++ b/content/photography/Cameras_Research/Pocket_film/Pocket_film_camera_MOC.md @@ -0,0 
+1,12 @@
+---
+title: Pocket Film Camera MOC
+tags:
+- photography
+- review
+- MOC
+- camera
+---
+
+# Rollei
+
+* [Rollei35](photography/Cameras_Research/Pocket_film/Rollei_35.md)
\ No newline at end of file
diff --git a/content/photography/Cameras_Research/Pocket_film/Rollei_35.md b/content/photography/Cameras_Research/Pocket_film/Rollei_35.md
new file mode 100644
index 000000000..38905ce86
--- /dev/null
+++ b/content/photography/Cameras_Research/Pocket_film/Rollei_35.md
@@ -0,0 +1,11 @@
+---
+title: Rollei 35 review
+tags:
+- photography
+- rollei35
+- rollei
+- camera
+- review
+---
+
+
diff --git a/content/photography/Cameras_Research/Polaroid/Polaroid.md b/content/photography/Cameras_Research/Polaroid/Polaroid.md
new file mode 100644
index 000000000..24225cca6
--- /dev/null
+++ b/content/photography/Cameras_Research/Polaroid/Polaroid.md
@@ -0,0 +1,25 @@
+---
+title: Polaroid
+tags:
+- camera
+- photography
+- MOC
+- polaroid
+---
+
+# Polaroid Background
+
+![](photography/Cameras_Research/Polaroid/attachments/Pasted%20image%2020230330195031.png)
+
+Polaroid is an American camera and film company founded in 1937 that was once the leader of the instant-camera market. Polaroid released its first instant camera in the 1950s and, over the following decades, kept introducing new instant cameras and film, becoming a brand used all over the world.
+
+One of Polaroid's best-known features is its "instant imaging" technology, which lets users see the photo they just took within seconds. Polaroid instant cameras became the choice of many people for recording important moments and creating unique works of art.
+
+Besides instant cameras, Polaroid also makes and sells other cameras, camera accessories, digital photo frames, photo printers and more. Polaroid has also partnered with other brands on many co-branded cameras and other products.
+
+Over Polaroid's nearly 90-year history, its cameras and film have become symbols of culture and art, and they continue to shape how people think about photography and image-making.
+
+# Polaroid Camera Review
+
+* [Polaroid one600](photography/Cameras_Research/Polaroid/Polaroid_one600.md)
+* [Polaroid Integral 600 Series](photography/Cameras_Research/Polaroid/Polaroid_600.md)
diff --git a/content/photography/Cameras_Research/Polaroid/Polaroid_600.md b/content/photography/Cameras_Research/Polaroid/Polaroid_600.md
new file mode 100644
index 000000000..f0904a283
--- /dev/null
+++ b/content/photography/Cameras_Research/Polaroid/Polaroid_600.md
@@ -0,0 +1,13 @@
+---
+title: Polaroid 600
+tags:
+- polaroid
+- camera
+- review
+- photography
+---
+
+# Reference
+
+* [How do I use my Vintage Polaroid 600 camera? – Retrospekt](https://retrospekt.com/blogs/ask-the-expert/how-do-i-use-my-vintage-polaroid-600-instant-camera)
+* [Polaroid Integral 600 Series - Camera-wiki.org - The free camera encyclopedia](http://camera-wiki.org/wiki/Polaroid_Integral_600_Series)
diff --git a/content/photography/Cameras_Research/Polaroid/Polaroid_one600.md b/content/photography/Cameras_Research/Polaroid/Polaroid_one600.md
new file mode 100644
index 000000000..21a0c4c11
--- /dev/null
+++ b/content/photography/Cameras_Research/Polaroid/Polaroid_one600.md
@@ -0,0 +1,46 @@
+---
+title: Polaroid One 600 Camera Review
+tags:
+- camera
+- photography
+- review
+- polaroid
+---
+
+![](photography/Cameras_Research/Polaroid/attachments/Pasted%20image%2020230330195707.png)
+
+# Specifications
+
+- **(Wide) 100mm lens with minimum focus distance of 3 feet.**
+- **Maximum aperture f/12.9 (unclear whether it can change).**
+- **1/200 s to 1/3 s.**
+- **Fixed focus.**
+- Exposure modes - **program automatic**.
+- "Aerodynamic" styling (particularly when folded) with downward curve at back.
+- Flash moved to right hand side of user and can be manually switched on and off.
+- Hand grip on right.
+- LCD frame counter.
+- Self-timer.
+
+## Functionally similar models
+
+- Polaroid One (silver/grey)
+- Polaroid One600 Job Pro (black/silver/yellow) (Close-focus to 18 inches!)
+- Polaroid One600 Nero (all black) +- Polaroid One600 "Flowers" (white with purple and yellow flower design) +- Polaroid One600 Panna (white/black) +- Polaroid One600 "Poison Frog" (silver/grey with yellow/black pattern) +- Polaroid One600 Polala 2006 (red/silver with gold Chinese dragon) +- Polaroid One600 Pro (all silver) (Like Job Pro, close-focus to 18 inches!) +- Polaroid One600 Royksopp (grey/silver with 'Royksopp - Only This Moment' branding) +- Polaroid One600 Superheadz Special Edition Red Hat (silver/black, with 'red hat' cartoon character) +- Polaroid One600 Rossa (bright red/black) +- Polaroid One Rossa (as above) +- Polaroid One Ultra (silver/black) (Close focus to 2 feet) +- Polaroid Pop Kit (silver/black with stickers for user's customization) + +# Reference + +* [Polaroid One 600 Camera Review - by Dan Finnen](https://danfinnen.com/review/polaroid-one-600-camera-review/) +* [Polaroid One600 (Classic) - Camera-wiki.org - The free camera encyclopedia](http://camera-wiki.org/wiki/Polaroid_One600_(Classic)) diff --git a/content/photography/Cameras_Research/Polaroid/attachments/Pasted image 20230330195031.png b/content/photography/Cameras_Research/Polaroid/attachments/Pasted image 20230330195031.png new file mode 100644 index 000000000..04da89d18 Binary files /dev/null and b/content/photography/Cameras_Research/Polaroid/attachments/Pasted image 20230330195031.png differ diff --git a/content/photography/Cameras_Research/Polaroid/attachments/Pasted image 20230330195707.png b/content/photography/Cameras_Research/Polaroid/attachments/Pasted image 20230330195707.png new file mode 100644 index 000000000..320bcba0d Binary files /dev/null and b/content/photography/Cameras_Research/Polaroid/attachments/Pasted image 20230330195707.png differ diff --git a/content/photography/MoodBoard/Sea_20230428/Sea_20230428.md b/content/photography/MoodBoard/Sea_20230428/Sea_20230428.md new file mode 100644 index 000000000..e04f967be --- /dev/null +++ b/content/photography/MoodBoard/Sea_20230428/Sea_20230428.md @@ -0,0 +1,10 @@ +--- +title: 🌊Sea - 2023.04.28 +tags: +- moodboard +- photography +- landscape +--- + + +* [idea - reference image](photography/MoodBoard/Sea_20230428/idea.md) diff --git a/content/photography/MoodBoard/Sea_20230428/attachments/photono_gen_336060179_2380745882102401_2427706248624984378_n.jpg b/content/photography/MoodBoard/Sea_20230428/attachments/photono_gen_336060179_2380745882102401_2427706248624984378_n.jpg new file mode 100644 index 000000000..7c492be05 Binary files /dev/null and b/content/photography/MoodBoard/Sea_20230428/attachments/photono_gen_336060179_2380745882102401_2427706248624984378_n.jpg differ diff --git a/content/photography/MoodBoard/Sea_20230428/attachments/shiifoncake_338758486_601356648715316_3737336679741136784_n.jpg b/content/photography/MoodBoard/Sea_20230428/attachments/shiifoncake_338758486_601356648715316_3737336679741136784_n.jpg new file mode 100644 index 000000000..c1458466b Binary files /dev/null and b/content/photography/MoodBoard/Sea_20230428/attachments/shiifoncake_338758486_601356648715316_3737336679741136784_n.jpg differ diff --git a/content/photography/MoodBoard/Sea_20230428/attachments/shiifoncake_338803198_1141886276488589_5464974698780309052_n (1).jpg b/content/photography/MoodBoard/Sea_20230428/attachments/shiifoncake_338803198_1141886276488589_5464974698780309052_n (1).jpg new file mode 100644 index 000000000..e08a294b8 Binary files /dev/null and 
b/content/photography/MoodBoard/Sea_20230428/attachments/shiifoncake_338803198_1141886276488589_5464974698780309052_n (1).jpg differ diff --git a/content/photography/MoodBoard/Sea_20230428/attachments/shiifoncake_338803198_1141886276488589_5464974698780309052_n.jpg b/content/photography/MoodBoard/Sea_20230428/attachments/shiifoncake_338803198_1141886276488589_5464974698780309052_n.jpg new file mode 100644 index 000000000..e08a294b8 Binary files /dev/null and b/content/photography/MoodBoard/Sea_20230428/attachments/shiifoncake_338803198_1141886276488589_5464974698780309052_n.jpg differ diff --git a/content/photography/MoodBoard/Sea_20230428/attachments/shiifoncake_338949220_771246770941652_287141902256013940_n.jpg b/content/photography/MoodBoard/Sea_20230428/attachments/shiifoncake_338949220_771246770941652_287141902256013940_n.jpg new file mode 100644 index 000000000..6627b41f9 Binary files /dev/null and b/content/photography/MoodBoard/Sea_20230428/attachments/shiifoncake_338949220_771246770941652_287141902256013940_n.jpg differ diff --git a/content/photography/MoodBoard/Sea_20230428/attachments/shiifoncake_339164445_155642070453847_6842139942547564019_n (1).jpg b/content/photography/MoodBoard/Sea_20230428/attachments/shiifoncake_339164445_155642070453847_6842139942547564019_n (1).jpg new file mode 100644 index 000000000..a8a8e5534 Binary files /dev/null and b/content/photography/MoodBoard/Sea_20230428/attachments/shiifoncake_339164445_155642070453847_6842139942547564019_n (1).jpg differ diff --git a/content/photography/MoodBoard/Sea_20230428/attachments/shiifoncake_339164445_155642070453847_6842139942547564019_n.jpg b/content/photography/MoodBoard/Sea_20230428/attachments/shiifoncake_339164445_155642070453847_6842139942547564019_n.jpg new file mode 100644 index 000000000..a8a8e5534 Binary files /dev/null and b/content/photography/MoodBoard/Sea_20230428/attachments/shiifoncake_339164445_155642070453847_6842139942547564019_n.jpg differ diff --git a/content/photography/MoodBoard/Sea_20230428/idea.md b/content/photography/MoodBoard/Sea_20230428/idea.md new file mode 100644 index 000000000..473b43f0b --- /dev/null +++ b/content/photography/MoodBoard/Sea_20230428/idea.md @@ -0,0 +1,45 @@ +--- +title: idea - reference image +tags: +- photography +- moodboard +- idea +--- + +# [Fujifilm_Blue_by_小红书_Philips谢骏](photography/Aesthetic/Landscape/Sea/Fujifilm_Blue_by_小红书_Philips谢骏.md) + +![](photography/Aesthetic/Landscape/Sea/attachments/Pasted%20image%2020230420014349.png) + + +![](photography/Aesthetic/Landscape/Sea/attachments/Pasted%20image%2020230420014354.png) + + +![](photography/Aesthetic/Landscape/Sea/attachments/Pasted%20image%2020230420014401.png) + + +![](photography/Aesthetic/Landscape/Sea/attachments/Pasted%20image%2020230420014613.png) + + +![](photography/Aesthetic/Landscape/Sea/attachments/Pasted%20image%2020230420014622.png) + + +![](photography/Aesthetic/Landscape/Sea/attachments/Pasted%20image%2020230420014634.png) + +# [豊島_Instagram_shiifoncake](photography/Aesthetic/Landscape/Sea/豊島_Instagram_shiifoncake.md) + +![](photography/MoodBoard/Sea_20230428/attachments/shiifoncake_338949220_771246770941652_287141902256013940_n.jpg) + +![](photography/MoodBoard/Sea_20230428/attachments/shiifoncake_339164445_155642070453847_6842139942547564019_n%20(1).jpg) + +![](photography/MoodBoard/Sea_20230428/attachments/shiifoncake_339164445_155642070453847_6842139942547564019_n.jpg) + 
+![](photography/MoodBoard/Sea_20230428/attachments/shiifoncake_338803198_1141886276488589_5464974698780309052_n%20(1).jpg) + +![](photography/MoodBoard/Sea_20230428/attachments/shiifoncake_338803198_1141886276488589_5464974698780309052_n.jpg) + +![](photography/MoodBoard/Sea_20230428/attachments/shiifoncake_338758486_601356648715316_3737336679741136784_n.jpg) + + +# [寄り道の理由。- Instagram, photono_gen](https://www.instagram.com/p/CrVPFjZvvlr/) + +![](photography/MoodBoard/Sea_20230428/attachments/photono_gen_336060179_2380745882102401_2427706248624984378_n.jpg) \ No newline at end of file diff --git a/content/photography/Photography_MOC.md b/content/photography/Photography_MOC.md new file mode 100644 index 000000000..90af443ef --- /dev/null +++ b/content/photography/Photography_MOC.md @@ -0,0 +1,80 @@ +--- +title: "Photography - MOC" +tags: +- MOC +- photography +--- + +# 🌊Photo Portfolio +You can see my photography works in: + +* [🎨Slide show](https://pinkr1ver.com/PhotoGallery/) +* [🌄Photo Collection in Notion](https://pinkr1ver.notion.site/3cfdd332b9a94b20bca041f2aa2bdcd2?v=24e696e6ab754386a710bc8e83976357&pvs=4) +* [🍻Instagram](https://www.instagram.com/jude.wang.yc/?next=%2F) +* [🧶小红书](https://www.xiaohongshu.com/user/profile/6272c025000000002102353b) + +# Notes +Also, here's my notes about learning photography + +## About Basic Concepts: + +* [Saturation](photography/Basic/Saturation.md) + +## Appreciation of other works - about ***aesthetic*** + +* [👧Portrait](photography/Aesthetic/Portrait/Portrait_MOC.md) +* [🏔Landscape](photography/Aesthetic/Landscape/Landscape_MOC.md) +* [☝Style](photography/Aesthetic/Style/Style_MOC.md) +* [✨Polaroid](photography/Aesthetic/Polaroid/Polaroid_aesthetic_MOC.md) + +## Camera Research + +* [✨Polaroid](photography/Cameras_Research/Polaroid/Polaroid.md) +* [📷Lens Structure](photography/Cameras_Research/Lens_Structure/Lens_Structure_MOC.md) +* [📸Pocket film camera](photography/Cameras_Research/Pocket_film/Pocket_film_camera_MOC.md) + +## Skills I learned + +* [How to measure light using Polaroid?](photography/Skills/Polaroid_light.md) +* [How to use Moodboard](photography/Skills/Moodboard.md) +* [How to show your Polaroid Picture](photography/Aesthetic/Polaroid/Polaroid_showcase.md) + +## Photography story + +* [夜爬蛤蟆峰拍Polaroid慢门 - 2023.04.14](photography/Story/Rainy_evening_hiking_Polaroid.md) + +## Mood Board + +* [🌊Sea - 2023.04.28](photography/MoodBoard/Sea_20230428/Sea_20230428.md) + +## Meme + +* [Photography meme](photography/Photography_meme/Photography_meme.md) + + +# Reference + +## Platform + +* [Magnum Photos](https://www.magnumphotos.com/) +* [CNU - Catch Next Ultimate](http://www.cnu.cc/) + +## Greatest Artist + +* [linksphotograph](https://www.linksphotograph.com/) +* [HAMADA Hideaki / 濱田英明](https://www.hideakihamada.com) +* [Jason Kummerfeldt](https://graincheck.darkroom.com/) and [his youtube](https://www.youtube.com/@grainydaysss) +* [Nguan](https://nguan.tv/) +* [Marta Bevacqua](https://www.martabevacquaphotography.com/) +* [Sam Zhang](https://www.instagram.com/itscapturedbysam/) + +## Content Collector & Photographer + +* [🦺搬运UP主 - 豆腐素包](https://space.bilibili.com/196700312/video) +* [小八怪 - 小红书](https://www.xiaohongshu.com/user/profile/5558b47f5894463d532a632c) + + +# Photography Resume + + + diff --git a/content/photography/Photography_meme/Photography_meme.md b/content/photography/Photography_meme/Photography_meme.md new file mode 100644 index 000000000..8d404ba6c --- /dev/null +++ 
b/content/photography/Photography_meme/Photography_meme.md @@ -0,0 +1,10 @@ +--- +title: Photography Meme +tags: + - photography + - meme + - film + - happy +--- + +![](photography/Photography_meme/attachments/QQ图片20230424193512.png) \ No newline at end of file diff --git a/content/photography/Photography_meme/attachments/QQ图片20230424193512.png b/content/photography/Photography_meme/attachments/QQ图片20230424193512.png new file mode 100644 index 000000000..5c6e0ca8b Binary files /dev/null and b/content/photography/Photography_meme/attachments/QQ图片20230424193512.png differ diff --git a/content/photography/Photography_meme/attachments/T66RNLJN[]R@2F5G]9%25ZY.png b/content/photography/Photography_meme/attachments/T66RNLJN[]R@2F5G]9%25ZY.png new file mode 100644 index 000000000..5c6e0ca8b Binary files /dev/null and b/content/photography/Photography_meme/attachments/T66RNLJN[]R@2F5G]9%25ZY.png differ diff --git a/content/photography/Skills/Moodboard.md b/content/photography/Skills/Moodboard.md new file mode 100644 index 000000000..628930e42 --- /dev/null +++ b/content/photography/Skills/Moodboard.md @@ -0,0 +1,54 @@ +--- +title: How to use Moodboard +tags: +- photography +- skill +--- + +# Overview + +1. 选题 +2. 风格 +3. 色彩 +4. 服装道具 +5. 模特 +6. 场地 +7. 构图 +8. 布光 + +# 选题 + +将参考图放进灵感文件夹 + +# 风格 + +在参考图中风格提取,一般可以收集200张参考图 + +# 色彩 + +使用[Adobe Color](https://color.adobe.com/)确定色彩方案 + + +# 服装道具 + +略 + +# 模特 + +略 + +# 场地 + +略 + +# 构图 + +使用参考图和手绘 + +# 布光 + +略 + +# Reference + +* [要做出完美的拍摄策划,必须知道的8个重点 - 小红书, Tripitaka Wu](https://www.xiaohongshu.com/user/profile/6272c025000000002102353b/62024914000000002103cedf) \ No newline at end of file diff --git a/content/photography/Skills/Polaroid_light.md b/content/photography/Skills/Polaroid_light.md new file mode 100644 index 000000000..c22997763 --- /dev/null +++ b/content/photography/Skills/Polaroid_light.md @@ -0,0 +1,22 @@ +--- +title: How to measure light using Polaroid? +tags: +- photography +- Polaroid +- film +- skill +--- + +The most thing you need to know is that, **the right exposure is in your head**. + +# Basic + + + +# Practice + + +# Reference + +* [How to EXPOSE your POLAROID PICTURE - Youtuber Analog Things](https://www.youtube.com/watch?v=iqU5YRG8WiE) + diff --git a/content/photography/Skills/howToShowPolaroid.md b/content/photography/Skills/howToShowPolaroid.md new file mode 100644 index 000000000..d5138a151 --- /dev/null +++ b/content/photography/Skills/howToShowPolaroid.md @@ -0,0 +1,9 @@ +--- +title: How to Show Polaroid? 
+tags:
+ - Polaroid
+ - photography
+ - skill
+---
+
+* [Polaroid re-shoot 3×3 grid (宝丽来翻拍9宫格)](photography/Aesthetic/Polaroid/Polaroid_showcase.md)
\ No newline at end of file
diff --git a/content/photography/Story/Rainy_evening_hiking_Polaroid.md b/content/photography/Story/Rainy_evening_hiking_Polaroid.md
new file mode 100644
index 000000000..c6e87bd75
--- /dev/null
+++ b/content/photography/Story/Rainy_evening_hiking_Polaroid.md
@@ -0,0 +1,80 @@
+---
+title: 夜爬蛤蟆峰拍Polaroid慢门 - 2023.04.14
+tags:
+- photography
+- Polaroid
+- story
+- film
+---
+
+# Hiking
+
+On Friday, Zhou Tan came to Hangzhou. The plan: climb Hama Peak at night to shoot long-exposure Polaroids of the West Lake night view.
+
+After dinner the rain picked up, but with spirits undampened we went anyway.
+
+At the foot of the hill, the drizzle already gave the light a distinctly Tyndall-effect look.
+
+![](photography/Story/attachments/9970714720C0835E6547C263418D551B.jpg)
+
+The rain gradually made the rocks slippery, and climbing the rocks near the summit of Hama Peak at speed becomes genuinely dangerous; it is hard to convey, so perhaps ask a Hangzhou local you know. Zhou Tan fell just before the final stretch. Luckily his backpack absorbed almost all of the impact, and it made him realize how risky this place is in the rain, with a real undertone of extreme sport to it.
+
+In the end, treading very carefully, we reached the top.
+
+# Photographer
+
+Shooting long exposures on the summit of Hama Peak takes some skill in rigging the tripod and metering the light, and the rain made both harder.
+
+![](photography/Story/attachments/QQ视频20230416012046.mp4)
+
+![](photography/Story/attachments/FCB8B96468D3B459532E010E865D0B99.jpg)
+
+
+After metering and adjusting the exposure in the Polaroid app, the plan for the night shots was $f/22$, 30 s shutter speed, ISO 640 i-Type film. The result first:
+
+![](photography/Story/attachments/IMG_5553.jpg)
+
+The film-to-digital scan was done with the Polaroid app scanner on an iPhone 12 mini, so the scan itself is mediocre, but it is clear the exposure came out unsatisfying. I put this down to the following causes:
+* Bad weather and high humidity, which aggravated the dispersion of the light
+* Not accounting for the **reciprocity failure** of i-Type film, so the exposure time was too short (the major cause 🚧🚧🚧)
+* Not looking up the aperture at which the Polaroid Now+ lens is optically at its best; I simply assumed $f/22$, and the resulting overlong exposure badly dispersed the point light sources (I still have not found the optical performance curves of the Polaroid Now+ lens 🚧🚧🚧)
+
+On top of that, not yet knowing how the Now+ "+" button works cost one wasted sheet of film that night. Here is how the "+" button on the Now+ is used:
+
+![](photography/Story/attachments/Pasted%20image%2020230416014050.png)
+
+Also, on one exposure that night the aperture was accidentally knocked to $f/33$, underexposing the shot even more severely. It looked roughly like this:
+
+![](photography/Story/attachments/IMG_5550.jpg)
+
+Note that a Polaroid exposes for at most 30 s. For anything longer you can keep the sheet in the camera and make a second exposure, but exposures beyond 30 s may come out very poorly.
+
+## Portraits
+
+We made two portraits with the same settings, $f/22$, 30 s shutter speed, ISO 640 i-Type film, with the Polaroid flash at its highest level:
+
+![](photography/Story/attachments/IMG_5492.jpg)
+
+
+![](photography/Story/attachments/IMG_5493.jpg)
+
+The first portrait is a bit sharper; my own take is that the reflection off the umbrella helped.
+
+# The way back
+
+The way back was the more eventful part: the tripod had been left at the foot of the hill, so we went back for it, and in doing so my phone got left behind in the ride-hailing car.
+
+With the phone gone I could not check the order details, and so could not reach the driver or customer service.
+
+So I logged in to Amap's ride-hailing on Zhou Tan's phone to look up the order, but logging in to Amap requires an SMS verification code, which is idiotic design. Luckily I had my Apple Watch: once it joined Zhou Tan's hotspot it synced my phone's messages and received the code, and I finally got into Amap.
+
+Amap's ride-hailing aggregates many providers, which tangled things further. My order was through Tmall Go, Amap would not let me contact the driver directly, and Tmall Go's customer service line would not connect. In the end I got through to Amap's own customer service and reached the driver.
+
+The driver was already on his way to Binjiang, so we waited at the foot of the hill, by the Zhong'er noodle shop on Baochu Road. Zhou Tan happened to still be hungry, and so, by sheer coincidence, we also got a plate of banchuan stir-fried noodles there, a fairly Hangzhou-style meal.
+
+![](photography/Story/attachments/A9A6699D1859851AB1D66131BD1382DC.jpg)
+
+
+# Route
+
+![](photography/Story/attachments/QQ图片20230417203443.jpg)
\ No newline at end of file
diff --git a/content/photography/Story/attachments/9970714720C0835E6547C263418D551B.jpg b/content/photography/Story/attachments/9970714720C0835E6547C263418D551B.jpg
new file mode 100644
index 000000000..6469efc48
Binary files /dev/null and b/content/photography/Story/attachments/9970714720C0835E6547C263418D551B.jpg differ
diff --git a/content/photography/Story/attachments/A9A6699D1859851AB1D66131BD1382DC.jpg b/content/photography/Story/attachments/A9A6699D1859851AB1D66131BD1382DC.jpg
new file mode 100644
index 000000000..cca7947f6
Binary files /dev/null and b/content/photography/Story/attachments/A9A6699D1859851AB1D66131BD1382DC.jpg differ
diff --git a/content/photography/Story/attachments/FCB8B96468D3B459532E010E865D0B99.jpg b/content/photography/Story/attachments/FCB8B96468D3B459532E010E865D0B99.jpg
new file mode 100644
index 000000000..e35bef265
Binary files /dev/null and b/content/photography/Story/attachments/FCB8B96468D3B459532E010E865D0B99.jpg differ
diff --git a/content/photography/Story/attachments/IMG_5492.jpg 
b/content/photography/Story/attachments/IMG_5492.jpg new file mode 100644 index 000000000..ac5a4f19f Binary files /dev/null and b/content/photography/Story/attachments/IMG_5492.jpg differ diff --git a/content/photography/Story/attachments/IMG_5493.jpg b/content/photography/Story/attachments/IMG_5493.jpg new file mode 100644 index 000000000..de386dfa5 Binary files /dev/null and b/content/photography/Story/attachments/IMG_5493.jpg differ diff --git a/content/photography/Story/attachments/IMG_5550.jpg b/content/photography/Story/attachments/IMG_5550.jpg new file mode 100644 index 000000000..28deca352 Binary files /dev/null and b/content/photography/Story/attachments/IMG_5550.jpg differ diff --git a/content/photography/Story/attachments/IMG_5553.jpg b/content/photography/Story/attachments/IMG_5553.jpg new file mode 100644 index 000000000..94e28571e Binary files /dev/null and b/content/photography/Story/attachments/IMG_5553.jpg differ diff --git a/content/photography/Story/attachments/Pasted image 20230416014050.png b/content/photography/Story/attachments/Pasted image 20230416014050.png new file mode 100644 index 000000000..94d2bc111 Binary files /dev/null and b/content/photography/Story/attachments/Pasted image 20230416014050.png differ diff --git a/content/photography/Story/attachments/QQ图片20230417203443.jpg b/content/photography/Story/attachments/QQ图片20230417203443.jpg new file mode 100644 index 000000000..60c729b23 Binary files /dev/null and b/content/photography/Story/attachments/QQ图片20230417203443.jpg differ diff --git a/content/photography/Story/attachments/QQ视频20230416012046.mp4 b/content/photography/Story/attachments/QQ视频20230416012046.mp4 new file mode 100644 index 000000000..e11a89f13 Binary files /dev/null and b/content/photography/Story/attachments/QQ视频20230416012046.mp4 differ diff --git a/content/photography/resume.md b/content/photography/resume.md new file mode 100644 index 000000000..5882989da --- /dev/null +++ b/content/photography/resume.md @@ -0,0 +1,16 @@ +--- +title: Photography Resume +tags: + - resume + - photography +--- +
+ + +

Jude Wang

+ +
+
diff --git a/content/physics/Electromagnetism/Basic/Electric_units.md b/content/physics/Electromagnetism/Basic/Electric_units.md
new file mode 100644
index 000000000..269b49feb
--- /dev/null
+++ b/content/physics/Electromagnetism/Basic/Electric_units.md
@@ -0,0 +1,79 @@
+---
+title: Electric Units
+tags:
+- circuit
+- basic
+- physics
+- electric
+---
+# Electrical impedance
+
+$$
+Z = \sqrt{R^2 + {(X_L-X_C)}^2}
+$$
+
+
+* $Z$ = impedance
+* $R$ = resistance
+* $X_L$ = inductive reactance
+* $X_C$ = capacitive reactance
+
+![](physics/Electromagnetism/Basic/attachments/Pasted%20image%2020230330163734.png)
+
+**Impedance** is the collective term for the opposition that resistance, inductance, and capacitance present to alternating current. Impedance is a complex number: its real part is the **resistance** and its imaginary part the **reactance**. The opposition a capacitor presents to AC is the **capacitive reactance**, the opposition an inductor presents is the **inductive reactance**, and together the two make up the **reactance**.
+
+Impedance extends the concept of resistance to AC circuits: it describes not only the *relative amplitudes of voltage and current* but also their *relative phase*. When the current through the circuit is DC, impedance and resistance coincide, and resistance can be viewed as an impedance with zero phase.
+
+## Forms
+
+1. $R+jX$
+2. $Z_m\angle\theta$
+3. $Z_m e^{j\theta}$
+
+Impedance is defined as the frequency-domain ratio of voltage to current. The magnitude $Z_m$ is the ratio of the voltage amplitude to the current amplitude, and the phase $\theta$ is the phase difference between voltage and current.
+
+## Ohm's law
+
+$$
+v = iZ = iZ_m e^{j\theta}
+$$
+
+The magnitude $Z_m$ acts just like a resistance: given the current $i$, it yields the voltage drop $v$ across the impedance $Z$. The phase factor $e^{j\theta}$ says the current lags the voltage by the phase difference $\theta$.
+
+> [!tip]
+> In the time domain, the current signal trails the voltage signal by $\theta T/2\pi$ seconds.
+
+## Ideal impedances
+$$
+Z_R = R
+$$
+
+$$
+Z_C = \frac{1}{j\omega C}
+$$
+
+$$
+Z_L = j \omega L
+$$
+
+* For a capacitor, the AC voltage lags the AC current by 90°;
+* for an inductor, the AC voltage leads the AC current by 90°.
+
+### Capacitive reactance
+
+$$
+X_C = -j/\omega C
+$$
+As $\omega$ tends to 0 the source tends to DC, and the magnitude of the capacitive reactance tends to infinity; *so at low frequencies a capacitor behaves like an open circuit. The higher the source frequency, the lower the capacitive reactance and the less it impedes the current; at high frequencies a capacitor behaves like a short circuit.*
+
+### Inductive reactance
+
+$$
+X_L = j\omega L
+$$
+From this equation: as the angular frequency of the AC source tends to zero, the source tends to DC, the inductive reactance tends to zero, and the current passes almost unimpeded. *So at low frequencies an inductor behaves like a short circuit. The higher the source's angular frequency, the higher the inductive reactance, and for a given source voltage amplitude the current tends to zero; at high frequencies an inductor behaves like an open circuit.*
+
+
+# Reference
+
+[电气单位(V,A,Ω,W,...) (rapidtables.org)](https://www.rapidtables.org/zh-CN/electric/Electric_units.html)
diff --git a/content/physics/Electromagnetism/Basic/attachments/Pasted image 20230330163734.png b/content/physics/Electromagnetism/Basic/attachments/Pasted image 20230330163734.png
new file mode 100644
index 000000000..054eac17d
Binary files /dev/null and b/content/physics/Electromagnetism/Basic/attachments/Pasted image 20230330163734.png differ
diff --git a/content/physics/Electromagnetism/Basic/attachments/Pasted image 20230330165822.png b/content/physics/Electromagnetism/Basic/attachments/Pasted image 20230330165822.png
new file mode 100644
index 000000000..b6890fdb5
Binary files /dev/null and b/content/physics/Electromagnetism/Basic/attachments/Pasted image 20230330165822.png differ
diff --git a/content/physics/Electromagnetism/Electromagnetism_MOC.md b/content/physics/Electromagnetism/Electromagnetism_MOC.md
new file mode 100644
index 000000000..a450f365c
--- /dev/null
+++ b/content/physics/Electromagnetism/Electromagnetism_MOC.md
@@ -0,0 +1,19 @@
+---
+title: Electromagnetism MOC
+tags:
+- physics
+- MOC
+- electromagnetism
+---
+
+# Basic
+
+* [Electric units](physics/Electromagnetism/Basic/Electric_units.md)
+
+## Advanced
+
+* [Maxwell's equation](physics/Electromagnetism/Maxwells_equation.md)
+
+# Circuit
+
+* [Resonant circuit](physics/Electromagnetism/Resonant_circuit.md)
\ No newline at end of file
diff --git a/content/physics/Electromagnetism/Maxwells_equation.md b/content/physics/Electromagnetism/Maxwells_equation.md
new file mode 100644
index 000000000..a5644f725
--- /dev/null
+++ b/content/physics/Electromagnetism/Maxwells_equation.md
@@ -0,0 +1,195 @@
+---
+title: Maxwell's Equation
+tags:
+- physics
+- electromagnetism
+- nuclear-level-knowledge
+---
+
+# Equation
+
+
+$$
+\nabla \cdot E = \frac{\rho}{\epsilon_0}
+$$
+
+$$
+\nabla \cdot B = 0
+$$
+
+$$
+\nabla \times E = -\frac{\partial B}{\partial t}
+$$
+
+$$
+\nabla \times B = \mu_0 (J + \epsilon_0 \frac{\partial E}{\partial t})
+$$
+
+# Vector field
+
+Essentially a vector field is what you get if you associate each point in space with a vector, some magnitude and direction. Maybe those vectors represent the velocities of particles of fluid at each point in space, or maybe they represent the force of gravity at many different points in space, or maybe a magnetic field strength.
+
+> [!note]
+> If you were to draw the vectors to scale, the longer ones end up just cluttering the whole thing, so it's common to basically lie a little and artificially shorten ones that are too long, maybe using **color to give some vague sense of length**.
+
+![](physics/Electromagnetism/attachments/Pasted%20image%2020230411151612.png)
+
+## Divergence
+
+![](physics/Electromagnetism/attachments/my-life.gif)
+
+The divergence of a vector field, $\nabla \cdot F$, measures how strongly fluid is being generated at the point $(x, y)$.
+
+So in the animation above, at the source points that generate fluid, $\nabla \cdot F$ is positive;
+
+at the sink points where the fluid flows in, $\nabla \cdot F$ is negative.
+
+![](physics/Electromagnetism/attachments/Pasted%20image%2020230411155711.png)
+
+Likewise, if fluid flows into a point slowly but flows out fast, the divergence at that point is also positive.
+
+![](physics/Electromagnetism/attachments/my-life%201.gif)
+
+A vector field takes an input point to a multi-dimensional output, a direction with a scale. The divergence, by contrast, depends on the behavior of the field in a small neighborhood around the point: its output is a single number measuring how much that point acts as a source or a sink.
+
+![](physics/Electromagnetism/attachments/Pasted%20image%2020230411161346.png)
+
+> [!note]
+> For actual fluid flow: $\text{div} F = 0$ everywhere
+
+## Curl
+
+![](physics/Electromagnetism/attachments/output%202.gif)
+
+Curl measures how much the fluid rotates around a point: counterclockwise rotation corresponds to positive curl, clockwise rotation to negative curl.
+
+![](physics/Electromagnetism/attachments/curl.gif)
+
+The curl at the point shown above is also nonzero, because the fluid is fast above and slow below, resulting in a net clockwise influence.
+
+## Calculate divergence and curl
+
+$$
+\text{div} F = \nabla \cdot F =
+\begin{bmatrix}
+\frac{\partial}{\partial x} \\
+\frac{\partial}{\partial y}
+\end{bmatrix} \cdot
+\begin{bmatrix}
+F_x \\
+F_y
+\end{bmatrix} = \frac{\partial F_x}{\partial x} + \frac{\partial F_y}{\partial y}
+$$
+
+$$
+\text{curl} F = \nabla \times F =
+\begin{bmatrix}
+\frac{\partial}{\partial x} \\
+\frac{\partial}{\partial y}
+\end{bmatrix} \times
+\begin{bmatrix}
+F_x \\
+F_y
+\end{bmatrix}
+= \frac{\partial F_y}{\partial x} - \frac{\partial F_x}{\partial y}
+$$
+
+![](physics/Electromagnetism/attachments/calculation_result.gif)
+
+### Detail Explanation
+
+![](physics/Electromagnetism/attachments/Pasted%20image%2020230412144351.png)
+
+![](physics/Electromagnetism/attachments/Pasted%20image%2020230412144501.png)
+
+Take a tiny step away from $(x_0, y_0)$: the field there is a new vector, which differs from the original vector by some difference vector.
+
+![](physics/Electromagnetism/attachments/div.gif)
+
+$\text{div} F(x_0, y_0)$ corresponds to the average of Step $\cdot$ Difference over steps in all $360^\circ$ of directions.
+
+Picture a source point firing vectors off in every direction: its Step $\cdot$ Difference is naturally positive.
+
+![](physics/Electromagnetism/attachments/Pasted%20image%2020230412145732.png)
+
+By the same token, it is not hard to see that $\text{curl} F(x_0, y_0)$ corresponds to Step $\times$ Difference.
+
+# Understand Maxwell's Equation
+
+Divergence and curl of vector fields are the key to understanding Maxwell's equations.
+
+## Gauss's Law
+
+$$
+\text{div} E = \frac{\rho}{\epsilon_0}
+$$
+
+
+![](physics/Electromagnetism/attachments/Pasted%20image%2020230411163735.png)
+
+* $\rho$ is the charge density
+* $\epsilon_0$ is epsilon naught, the permittivity of free space, which sets the strength of the electric field in free space
+
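+As a quick numerical sanity check of the divergence and curl formulas above (an illustrative sketch added to this note, with made-up sample fields, not part of the original derivation), `numpy.gradient` can stand in for the partial derivatives:
+
+```python
+import numpy as np
+
+# Sample F(x, y) = (x, y), a pure "source" field, on a grid
+x = np.linspace(-2, 2, 101)
+y = np.linspace(-2, 2, 101)
+X, Y = np.meshgrid(x, y, indexing="ij")
+Fx, Fy = X, Y                # try Fx, Fy = -Y, X for a pure rotation
+
+# Finite-difference partial derivatives of each component
+dFx_dx = np.gradient(Fx, x, axis=0)
+dFx_dy = np.gradient(Fx, y, axis=1)
+dFy_dx = np.gradient(Fy, x, axis=0)
+dFy_dy = np.gradient(Fy, y, axis=1)
+
+div  = dFx_dx + dFy_dy       # ≈ 2 everywhere: every point acts as a source
+curl = dFy_dx - dFx_dy       # ≈ 0: the radial field has no rotation
+print(div[50, 50], curl[50, 50])
+```
+
+Swapping in the rotation field $F = (-y, x)$ gives $\text{div} F \approx 0$ and $\text{curl} F \approx 2$, matching the source/rotation intuition above.
+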
+> [!note]
+> Intuitively:
+>
+> Gauss's law states that **the divergence of an electric field at a given point is proportional to the charge density at that point**.
+>
+> Think of **positively charged regions as acting like sources** of some imagined fluid, and **negatively charged regions as being the sinks** of that fluid.
+>
+> In parts of space where there is no charge, the fluid **would be flowing incompressibly**, just like water.
+
+
+## Gauss's law for magnetism
+
+$$
+\text{div} B = 0
+$$
+
+![](physics/Electromagnetism/attachments/Pasted%20image%2020230411165048.png)
+
+The divergence of the magnetic field is zero everywhere, meaning the magnetic "fluid" is incompressible, with no sources and no sinks, just like water. Put another way, this says magnetic monopoles do not exist.
+
+## Maxwell–Faraday equation (Faraday's law of induction)
+
+$$
+\nabla \times E = - \frac{1}{c} \frac{\partial B}{\partial t}
+$$
+
+(This is the Gaussian-units form; in the SI convention of the Equation section above, it reads $\nabla \times E = -\frac{\partial B}{\partial t}$.)
+
+![](physics/Electromagnetism/attachments/Pasted%20image%2020230419141438.png)
+
+![](physics/Electromagnetism/attachments/Pasted%20image%2020230419141637.png)
+## Ampère's circuital law (with Maxwell's addition)
+
+$$
+\nabla \times B = \frac{1}{c} (4\pi J + \frac{\partial E}{\partial t})
+$$
+
+(Again the Gaussian-units form; in SI it reads $\nabla \times B = \mu_0 (J + \epsilon_0 \frac{\partial E}{\partial t})$, as at the top of this note.)
+
+![](physics/Electromagnetism/attachments/Pasted%20image%2020230419141737.png)
+
+
+# Maxwell's equations explain EM waves
+
+Maxwell's complete and symmetric theory shows that the electric and magnetic forces are not separate, but different manifestations of the same thing: the electromagnetic force. This classical unification of forces is one motivation behind current attempts to unify the four basic forces in nature (gravity, the electric force, and the strong and weak nuclear forces).
+
+Maxwell predicted the existence of EM waves from his equations.
+
+He realized that oscillating charges, like those in an AC circuit, produce changing electric fields, and he predicted that these changing fields would propagate outward from their source the way waves spread across a lake from a jumping fish.
+
+The waves Maxwell predicted consist of oscillating electric and magnetic fields, which is precisely what defines an electromagnetic (EM) wave. EM waves can exert forces on charges far from their source, so they should be detectable. By solving his system of equations, Maxwell could derive the speed $c$ of an EM wave:
+
+$$
+c = \frac{1}{\sqrt{\mu_0 \epsilon_0}} = 3 \times 10^8 m/s
+$$
+
+The predicted waves lay in bands invisible to the naked eye; only after Maxwell's death did Hertz confirm the existence of EM waves experimentally.
+
+![](physics/Electromagnetism/attachments/Pasted%20image%2020230419155744.png)
+
+# Reference
+
+* [Fun fluid-flow illustrations - by 3B1B](https://anvaka.github.io/fieldplay/?cx=0&cy=0&w=8.5398&h=8.5398&dt=0.01&fo=0.998&dp=0.009&cm=1&vf=%2F%2F%20p.x%20and%20p.y%20are%20current%20coordinates%0A%2F%2F%20v.x%20and%20v.y%20is%20a%20velocity%20at%20point%20p%0Avec2%20get_velocity%28vec2%20p%29%20%7B%0A%20%20vec2%20v%20%3D%20vec2%280.%2C%200.%29%3B%0A%0A%20%20%2F%2F%20change%20this%20to%20get%20a%20new%20vector%20field%0A%20%20v.x%20%3D%20p.y%3B%0A%20%20v.y%20%3D%20%28max%28cos%28sin%28p.y%29%29%2Csin%28p.y%29%2Fp.y%29%2Bp.y%29%3B%0A%0A%20%20return%20v%3B%0A%7D&code=%2F%2F%20p.x%20and%20p.y%20are%20current%20coordinates%0A%2F%2F%20v.x%20and%20v.y%20is%20a%20velocity%20at%20point%20p%0Avec2%20get_velocity%28vec2%20p%29%20%7B%0A%20%20vec2%20v%20%3D%20vec2%280.%2C%200.%29%3B%0A%0A%20%20%2F%2F%20change%20this%20to%20get%20a%20new%20vector%20field%0A%20%20v.x%20%3D%20%28max%28p.x%2Cp.y%29%2Bmax%28p.y%2Cp.x%29%29%3B%0A%20%20v.y%20%3D%20p.y%3B%0A%0A%20%20return%20v%3B%0A%7D)
+* [Divergence and curl: The language of Maxwell's equations, fluid flow, and more - YouTube video by 3b1b](https://www.youtube.com/watch?v=rB83DpBJQsE)
+* [Let There Be Light: Maxwell's Equation EXPLAINED for BEGINNERS - YouTube video by Parth G](https://www.youtube.com/watch?v=0jW74lrpeM0)
+* [Faraday's Law - online experiment](https://em.geosci.xyz/content/maxwell1_fundamentals/formative_laws/faraday.html)
+* [Maxwell's Equations - Electromagnetic Waves Predicted and Observed](https://phys.libretexts.org/Bookshelves/College_Physics/Book%3A_College_Physics_1e_(OpenStax)/24%3A_Electromagnetic_Waves/24.01%3A_Maxwells_Equations-_Electromagnetic_Waves_Predicted_and_Observed)
\ No newline at end of file
diff --git a/content/physics/Electromagnetism/Q_factor.md 
b/content/physics/Electromagnetism/Q_factor.md
new file mode 100644
index 000000000..b51950464
--- /dev/null
+++ b/content/physics/Electromagnetism/Q_factor.md
@@ -0,0 +1,62 @@
+---
+title: Q factor
+tags:
+- physics
+- electric
+- electromagnetism
+- basic
+---
+
+# Explanation
+
+In physics and engineering, the quality factor or Q factor is a **dimensionless** parameter that describes how **underdamped** an oscillator or *resonator* is. It is defined as the ratio of the initial energy stored in the resonator to the *energy lost* in one radian of the cycle of oscillation. Q factor is alternatively defined as the ratio of a *resonator's center frequency to its bandwidth* when subject to an oscillating driving force. These two definitions give *numerically similar*, but not identical, results.
+
+> [!tip]
+> A high Q factor means the oscillator loses energy slowly, so the oscillation persists for a long time; a pendulum has a high Q in air and a low Q in oil.
+
+
+
+![](physics/Electromagnetism/attachments/Pasted%20image%2020230404144801.png)Fig. A damped oscillation. A low Q factor – about 5 here – means the oscillation dies out rapidly.
+
+
+At resonance, a high-Q oscillator has a **larger amplitude** near the resonant frequency, but resonates over a **smaller range of frequencies**; this range is called the bandwidth.
+
+For example, if the tuned circuit in a radio receiver has a high Q, tuning the receiver onto a particular frequency is harder, but its selectivity is better, so it also does a better job of filtering out the signals of stations adjacent in the spectrum.
+
+The Q factor of a system can vary enormously with the application and its requirements. *Systems where damping is the point* (for example a [damper keeping a door from slamming shut](warehouse/dampers_keeping_a_door_from_slamming%20shut.md)) *have a Q of 1⁄2*, while clocks, lasers, and other systems that need strong resonance or high frequency stability have high Q factors. A tuning fork has a Q of about 1000; atomic clocks, superconducting RF cavities in accelerators, and optical cavities can reach $10^{11}$.
+
+> [!help]
+> There are many *alternative quantities* used by physicists and engineers to describe how damped an oscillator is. Important examples include: the [damping ratio](https://en.wikipedia.org/wiki/Damping_ratio "Damping ratio"), [relative bandwidth](https://en.wikipedia.org/wiki/Bandwidth_(signal_processing) "Bandwidth (signal processing)"), [linewidth](https://en.wikipedia.org/wiki/Oscillator_linewidth "Oscillator linewidth") and bandwidth measured in [octaves](https://en.wikipedia.org/wiki/Octave_(electronics) "Octave (electronics)").
+
+
+# Definition
+
+![](physics/Electromagnetism/attachments/Pasted%20image%2020230404151254.png)
+
+Fig. The bandwidth $\Delta f$ of a damped harmonic oscillator, shown on a plot of energy versus frequency. The Q factor of the damped oscillator (or filter) is $f_{0}/\Delta f$; the higher the Q, the taller and narrower the peak.
+
+In the context of resonators, there are two common definitions for Q, which aren't exactly equivalent. They become approximately equivalent *as Q becomes larger*, meaning the resonator becomes less damped.
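+
+For concreteness, here is a small numerical sketch comparing the two definitions below for a series RLC circuit (an illustration added to this note; the component values are made up):
+
+```python
+import numpy as np
+
+# Hypothetical series RLC component values (illustration only)
+R, L, C = 10.0, 1e-3, 1e-9               # ohms, henries, farads
+
+f_r = 1 / (2 * np.pi * np.sqrt(L * C))   # resonant frequency, ≈ 159 kHz
+Q = np.sqrt(L / C) / R                   # stored-energy form (1/R)·sqrt(L/C) = 100
+
+# Bandwidth definition rearranged: Q = f_r / Δf  =>  Δf = f_r / Q
+delta_f = f_r / Q                        # FWHM bandwidth, ≈ 1.59 kHz
+print(f"f_r = {f_r:.3e} Hz, Q = {Q:.1f}, bandwidth = {delta_f:.3e} Hz")
+```
+
+(The series-RLC expression $Q = \frac{1}{R}\sqrt{L/C}$ used in the sketch is the one derived in the Resonant circuit note.)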
+
+## Bandwidth definition
+
+$$Q\stackrel{def}{=}\frac{f_r}{\Delta f}=\frac{\omega_r}{\Delta \omega}$$
+
+Here $f_r$ is the resonant frequency and $\Delta f$ the bandwidth, generally the [full width at half maximum](https://en.wikipedia.org/wiki/Full_width_at_half_maximum "Full width at half maximum") (FWHM).
+
+## Stored energy definition
+
+At the system's resonant frequency, when the signal amplitude does not change with time (so the stored energy does not change with time either), Q can be defined as **the ratio of the energy stored in the system to the energy supplied from outside per cycle**:
+
+$$Q = 2\pi \times \frac{\text{Energy Stored}}{\text{Energy dissipated per cycle}}=2\pi f_r \times \frac{\text{Energy Stored}}{\text{Power Loss}}$$
+
+Specifications for energy-storing components such as inductors also use a frequency-dependent Q factor, defined as
+
+$$Q(\omega) = \omega \times \frac{\text{Maximum Energy Stored}}{\text{Power Loss}}$$
+
+where $\omega$ is the angular frequency at which the stored energy and the power loss are measured.
+
+
+# Reference
+
+* [Q factor in wiki](https://en.wikipedia.org/wiki/Q_factor)
+* [品质因子](https://zh.wikipedia.org/zh-hans/%E5%93%81%E8%B3%AA%E5%9B%A0%E5%AD%90#:~:text=%E5%93%81%E8%B4%A8%E5%9B%A0%E5%AD%90%E6%88%96Q%E5%9B%A0%E5%AD%90,%E6%91%86Q%E5%9B%A0%E5%AD%90%E8%BE%83%E4%BD%8E%E3%80%82)
\ No newline at end of file
diff --git a/content/physics/Electromagnetism/Resonant_circuit.md b/content/physics/Electromagnetism/Resonant_circuit.md
new file mode 100644
index 000000000..c53eeec44
--- /dev/null
+++ b/content/physics/Electromagnetism/Resonant_circuit.md
@@ -0,0 +1,52 @@
+---
+title: Resonant circuit
+tags:
+- physics
+- electric
+---
+
+Take the series RLC circuit as the running example.
+
+# What is resonance
+
+The energy in the circuit's inductor $L$ equals the energy in its capacitor $C$: whenever one reactive component releases energy, the other reactive component absorbs the same amount, so the energy pulses back and forth between the two reactive components.
+
+# Two simple resonant circuits
+![](synthetic_aperture_radar_imaging/attachments/Pasted%20image%2020230330160535.png)
+
+
+Series resonance is the example below.
+
+## *Resonant Frequency*
+
+Resonance occurs when the [reactances](physics/Electromagnetism/Basic/Electric_units.md#Electrical%20impedance) of the capacitor and the inductor are equal in magnitude:
+
+$$
+|X_C| = |\frac{1}{j2\pi fC}| = |X_L| = |j2\pi fL|
+$$
+Rearranging,
+
+$$
+f^2 = \frac{1}{(2\pi)^2 C L}
+$$
+
+$$
+f = \frac{1}{2\pi \sqrt{LC}}
+$$
+
+## Properties of series resonance
+
+* The impedance is at its minimum and purely resistive: $Z = R + jX_L - jX_C = R$
+
+## **Quality factor** ([*Q factor*](physics/Electromagnetism/Q_factor.md))
+
+* The ratio of the reactive power produced by the inductor or capacitor at resonance to the average power dissipated in the resistor is called the quality factor at resonance.
+
+$$Q=\frac{Q_L}{P}=\frac{I^2X_L}{I^2R}=\frac{Q_C}{P}=\frac{I^2X_C}{I^2R}=\frac{1}{R}\sqrt{\frac{L}{C}}=\frac{\sqrt{X_LX_C}}{R}$$
+
+### Impedance versus frequency
+
+$Z = R + j(X_L-X_C)$
+* When $f=f_r$: $Z=R$ is at its minimum, and the circuit is resistive;
+* when $f>f_r$: $X_L>X_C$, and the circuit is inductive;
+* when $f<f_r$: $X_L<X_C$, and the circuit is capacitive.
diff --git a/content/physics/Electromagnetism/attachments/Pasted image 20230404144801.png b/content/physics/Electromagnetism/attachments/Pasted image 20230404144801.png
new file mode 100644
index 000000000..7b426ca60
Binary files /dev/null and b/content/physics/Electromagnetism/attachments/Pasted image 20230404144801.png differ
diff --git a/content/physics/Electromagnetism/attachments/Pasted image 20230404151254.png b/content/physics/Electromagnetism/attachments/Pasted image 20230404151254.png
new file mode 100644
index 000000000..313060ad0
Binary files /dev/null and b/content/physics/Electromagnetism/attachments/Pasted image 20230404151254.png differ
diff --git a/content/physics/Electromagnetism/attachments/Pasted image 20230411150141.png b/content/physics/Electromagnetism/attachments/Pasted image 20230411150141.png
new file mode 100644
index 000000000..98ee74bab
Binary files /dev/null and b/content/physics/Electromagnetism/attachments/Pasted image 20230411150141.png differ
diff --git a/content/physics/Electromagnetism/attachments/Pasted image 20230411151612.png b/content/physics/Electromagnetism/attachments/Pasted image 20230411151612.png
new file mode 100644
index 000000000..9ac30bcb1
Binary files /dev/null and b/content/physics/Electromagnetism/attachments/Pasted image 
20230411151612.png differ diff --git a/content/physics/Electromagnetism/attachments/Pasted image 20230411155711.png b/content/physics/Electromagnetism/attachments/Pasted image 20230411155711.png new file mode 100644 index 000000000..fd8bdfcb5 Binary files /dev/null and b/content/physics/Electromagnetism/attachments/Pasted image 20230411155711.png differ diff --git a/content/physics/Electromagnetism/attachments/Pasted image 20230411161346.png b/content/physics/Electromagnetism/attachments/Pasted image 20230411161346.png new file mode 100644 index 000000000..7669a628b Binary files /dev/null and b/content/physics/Electromagnetism/attachments/Pasted image 20230411161346.png differ diff --git a/content/physics/Electromagnetism/attachments/Pasted image 20230411163735.png b/content/physics/Electromagnetism/attachments/Pasted image 20230411163735.png new file mode 100644 index 000000000..b480b4e43 Binary files /dev/null and b/content/physics/Electromagnetism/attachments/Pasted image 20230411163735.png differ diff --git a/content/physics/Electromagnetism/attachments/Pasted image 20230411165048.png b/content/physics/Electromagnetism/attachments/Pasted image 20230411165048.png new file mode 100644 index 000000000..2f8ef79f8 Binary files /dev/null and b/content/physics/Electromagnetism/attachments/Pasted image 20230411165048.png differ diff --git a/content/physics/Electromagnetism/attachments/Pasted image 20230412144351.png b/content/physics/Electromagnetism/attachments/Pasted image 20230412144351.png new file mode 100644 index 000000000..09e16fb97 Binary files /dev/null and b/content/physics/Electromagnetism/attachments/Pasted image 20230412144351.png differ diff --git a/content/physics/Electromagnetism/attachments/Pasted image 20230412144501.png b/content/physics/Electromagnetism/attachments/Pasted image 20230412144501.png new file mode 100644 index 000000000..76f753088 Binary files /dev/null and b/content/physics/Electromagnetism/attachments/Pasted image 20230412144501.png differ diff --git a/content/physics/Electromagnetism/attachments/Pasted image 20230412145732.png b/content/physics/Electromagnetism/attachments/Pasted image 20230412145732.png new file mode 100644 index 000000000..aef860acd Binary files /dev/null and b/content/physics/Electromagnetism/attachments/Pasted image 20230412145732.png differ diff --git a/content/physics/Electromagnetism/attachments/Pasted image 20230419141438.png b/content/physics/Electromagnetism/attachments/Pasted image 20230419141438.png new file mode 100644 index 000000000..125289049 Binary files /dev/null and b/content/physics/Electromagnetism/attachments/Pasted image 20230419141438.png differ diff --git a/content/physics/Electromagnetism/attachments/Pasted image 20230419141637.png b/content/physics/Electromagnetism/attachments/Pasted image 20230419141637.png new file mode 100644 index 000000000..5f9c25b7d Binary files /dev/null and b/content/physics/Electromagnetism/attachments/Pasted image 20230419141637.png differ diff --git a/content/physics/Electromagnetism/attachments/Pasted image 20230419141737.png b/content/physics/Electromagnetism/attachments/Pasted image 20230419141737.png new file mode 100644 index 000000000..722f67140 Binary files /dev/null and b/content/physics/Electromagnetism/attachments/Pasted image 20230419141737.png differ diff --git a/content/physics/Electromagnetism/attachments/Pasted image 20230419155744.png b/content/physics/Electromagnetism/attachments/Pasted image 20230419155744.png new file mode 100644 index 000000000..2338ed416 Binary 
files /dev/null and b/content/physics/Electromagnetism/attachments/Pasted image 20230419155744.png differ diff --git a/content/physics/Electromagnetism/attachments/calculation_result.gif b/content/physics/Electromagnetism/attachments/calculation_result.gif new file mode 100644 index 000000000..e661e69c8 Binary files /dev/null and b/content/physics/Electromagnetism/attachments/calculation_result.gif differ diff --git a/content/physics/Electromagnetism/attachments/curl.gif b/content/physics/Electromagnetism/attachments/curl.gif new file mode 100644 index 000000000..8726b2a41 Binary files /dev/null and b/content/physics/Electromagnetism/attachments/curl.gif differ diff --git a/content/physics/Electromagnetism/attachments/div.gif b/content/physics/Electromagnetism/attachments/div.gif new file mode 100644 index 000000000..ffd1e4c4f Binary files /dev/null and b/content/physics/Electromagnetism/attachments/div.gif differ diff --git a/content/physics/Electromagnetism/attachments/my-life 1.gif b/content/physics/Electromagnetism/attachments/my-life 1.gif new file mode 100644 index 000000000..2a0fdebff Binary files /dev/null and b/content/physics/Electromagnetism/attachments/my-life 1.gif differ diff --git a/content/physics/Electromagnetism/attachments/my-life.gif b/content/physics/Electromagnetism/attachments/my-life.gif new file mode 100644 index 000000000..115691a61 Binary files /dev/null and b/content/physics/Electromagnetism/attachments/my-life.gif differ diff --git a/content/physics/Electromagnetism/attachments/output 2.gif b/content/physics/Electromagnetism/attachments/output 2.gif new file mode 100644 index 000000000..08119f823 Binary files /dev/null and b/content/physics/Electromagnetism/attachments/output 2.gif differ diff --git a/content/physics/Optical/attachments/Fig_1_Circles_of_confusion.gif b/content/physics/Optical/attachments/Fig_1_Circles_of_confusion.gif new file mode 100644 index 000000000..442bc8862 Binary files /dev/null and b/content/physics/Optical/attachments/Fig_1_Circles_of_confusion.gif differ diff --git a/content/physics/Optical/attachments/Fig_8_Chromatic_abberation_animation.gif b/content/physics/Optical/attachments/Fig_8_Chromatic_abberation_animation.gif new file mode 100644 index 000000000..eb75a0d01 Binary files /dev/null and b/content/physics/Optical/attachments/Fig_8_Chromatic_abberation_animation.gif differ diff --git a/content/physics/Optical/attachments/Fig_9_Chromatic_aberration_lateral.gif b/content/physics/Optical/attachments/Fig_9_Chromatic_aberration_lateral.gif new file mode 100644 index 000000000..6f3c6bb93 Binary files /dev/null and b/content/physics/Optical/attachments/Fig_9_Chromatic_aberration_lateral.gif differ diff --git a/content/physics/Optical/attachments/Pasted image 20230424110844.png b/content/physics/Optical/attachments/Pasted image 20230424110844.png new file mode 100644 index 000000000..aec0ceb78 Binary files /dev/null and b/content/physics/Optical/attachments/Pasted image 20230424110844.png differ diff --git a/content/physics/Optical/attachments/Pasted image 20230424111226.png b/content/physics/Optical/attachments/Pasted image 20230424111226.png new file mode 100644 index 000000000..c830f65df Binary files /dev/null and b/content/physics/Optical/attachments/Pasted image 20230424111226.png differ diff --git a/content/physics/Optical/attachments/Pasted image 20230424112159.png b/content/physics/Optical/attachments/Pasted image 20230424112159.png new file mode 100644 index 000000000..b8dcf54f9 Binary files /dev/null and 
b/content/physics/Optical/attachments/Pasted image 20230424112159.png differ
diff --git a/content/physics/Optical/attachments/Pasted image 20230424113453.png b/content/physics/Optical/attachments/Pasted image 20230424113453.png
new file mode 100644
index 000000000..28f1cebcc
Binary files /dev/null and b/content/physics/Optical/attachments/Pasted image 20230424113453.png differ
diff --git a/content/physics/Optical/attachments/Pasted image 20230424113838.png b/content/physics/Optical/attachments/Pasted image 20230424113838.png
new file mode 100644
index 000000000..c5b528d3d
Binary files /dev/null and b/content/physics/Optical/attachments/Pasted image 20230424113838.png differ
diff --git a/content/physics/Optical/optical_abberation.md b/content/physics/Optical/optical_abberation.md
new file mode 100644
index 000000000..918a92783
--- /dev/null
+++ b/content/physics/Optical/optical_abberation.md
@@ -0,0 +1,97 @@
+---
+title: Optical Aberration
+tags:
+- optical
+- photography
+- basic
+---
+
+# What is optical aberration
+
+Optical aberrations are defects of a lens design that make light spread out instead of focusing to form a sharp image; the effect ranges from every ray in the image being defocused to only certain points or edges being out of focus. Several types of aberration can appear when imaging. Building an ideal vision system corrected for every possible aberration raises the cost of the lens dramatically; in practice some form of aberration is always present in a lens, and what matters is minimizing its impact, so manufacturing any lens usually involves some compromises.
+
+# Circle of confusion
+
+To explain how aberrations blur an image, start with: what is a circle of confusion? When a point of light from the target reaches the lens and then converges exactly on the sensor, it is sharp; if instead it converges in front of or behind the sensor, the light is spread more widely across the sensor. This can be seen in Figure 1, where a point source converges onto the sensor, and the amount of light spread along the sensor changes as the sensor position changes.
+
+![](physics/Optical/attachments/Fig_1_Circles_of_confusion.gif)
+
+The more the light spreads out, the less the image is in focus. Unless the aperture is small, targets in the image that sit at very different distances usually leave the background or the foreground out of focus, because light that converges from the foreground and light from more distant background targets converge at different points.
+
+# Types of Optical Aberration
+
+## Coma (彗差)
+
+
+Comatic aberration, also called coma, is named for a distribution shaped like the tail of a comet.
+
+![](physics/Optical/attachments/Pasted%20image%2020230424110844.png)
+
+It is a defect inherent to some lenses, or created by the optical design, that deforms point sources off the optical axis, such as stars. Specifically, coma is defined as a variation in magnification across the entrance pupil. In refractive or diffractive optical systems, and especially in images covering a wide spectral range, coma is a function of wavelength.
+
+## Astigmatism (像散)
+
+Rays propagating in two perpendicular planes produce astigmatism when they focus at different points.
+
+This can be seen in Figure 3, where the two focal points are marked for the red horizontal plane and the blue vertical plane. The point of best sharpness in the image lies between these two points, where the circle of confusion of neither plane is too wide.
+
+![](physics/Optical/attachments/Pasted%20image%2020230424111226.png)
+
+When the optics are misaligned, astigmatism distorts the sides and edges of the image. It is usually described as a lack of sharpness when looking at lines in the image.
+
+This form of aberration can be corrected with an appropriate lens design, as in most high-quality optics. The first optics designed to fix astigmatism were made by Carl Zeiss, and those designs have been refined for more than a century. At this point it usually shows up only in very low-quality lenses, or when the internal optical elements have been damaged or shifted by dropping the lens.
+
+## (Petzval) Field Curvature (场曲)
+
+Many lenses focus onto a curved surface. This softens the corners of the image while mainly keeping the center of the image in focus. Most lenses have at least some curvature of focus, and without some cropping the whole image cannot be brought into focus.
+
+Field curvature is the image plane becoming non-flat as a result of there being multiple focal points.
+
+![](physics/Optical/attachments/Pasted%20image%2020230424112159.png)
+
+Camera lenses have largely corrected for this, but some field curvature can still be found in many lenses. Some sensor manufacturers are in fact working on curved sensors that compensate for the curved region of focus: such a design lets the sensor correct the aberration without requiring lens designs manufactured to that precision, so cheaper lenses can deliver high-quality results. A real example is the Kepler space observatory, where a curved sensor array corrects for the telescope's large spherical optics.
+
+## Distortion (畸变)
+
+Distortion is an aberration in which an object imaged through a lens system is magnified differently across its different parts; it degrades the geometric similarity between object and image, but does not affect sharpness. Based on the difference in magnification between the periphery and the center of the object, it comes in two classes:
+
+### Barrel distortion (桶形畸变)
+
+In an image with barrel distortion, the edges and sides curve away from the center. Visually this looks like a bulge in the image, because it captures the appearance of a curved field of view (FoV). For example, using a lens of short focal length (also called a wide-angle lens) high up in a tall building captures a wider FoV. As shown in Figure 5, the effect is most exaggerated with a fisheye lens, which produces a very distorted, very wide FoV; the grid lines in that image help show how the distortion stretches the image outward toward the sides and edges.
+
+![](physics/Optical/attachments/Pasted%20image%2020230424113453.png)
+
+
+### Pincushion distortion (枕型畸变)
+
+When rays bend toward the optical axis under pincushion distortion, the image appears stretched inward, so the edges and sides of the image seem to curve toward its center.
+
+This form of aberration is most common in telephoto lenses of long focal length.
+
+![](physics/Optical/attachments/Pasted%20image%2020230424113838.png)
+
+### Mustache distortion
+
+**Mustache distortion** 😂 is a combination of pincushion and barrel distortion: the inner part of the image curves outward while the outer part curves inward. It is a fairly rare aberration, with more than one distortion pattern affecting the image, and it is usually the mark of a very badly designed lens, the culmination of optical errors that blend aberrations together.
+
+
+## Chromatic (位置色差)
+
+### Longitudinal / axial aberration
+
+The color of light corresponds to a specific wavelength. Because of refraction, a color image has multiple wavelengths entering the lens and focusing at different points. Longitudinal, or axial, chromatic aberration is caused by different wavelengths focusing at different points along the optical axis: the shorter the wavelength, the closer to the lens it focuses, and the longer the wavelength, the farther away, as shown in Figure 8.
Stopping down to a smaller aperture still lets the incoming light focus at different points, but the width (diameter) of the circles of confusion becomes much smaller, so the blur is far less severe.
+
+![](physics/Optical/attachments/Fig_8_Chromatic_abberation_animation.gif)
+
+### Transverse / lateral aberration
+
+Transverse, or lateral, chromatic aberration is off-axis light causing the different wavelengths to spread out along the image plane. It produces colored fringing along the edges of subjects in the image, and it is harder to correct than longitudinal chromatic aberration.
+
+![](physics/Optical/attachments/Fig_9_Chromatic_aberration_lateral.gif)
+
+It can be fixed with an achromatic doublet that combines glasses of different refractive indices: bringing the two ends of the visible spectrum to a single focal point removes the color fringing. For both transverse and longitudinal chromatic aberration, shrinking the aperture also helps, and it can pay to avoid imaging targets in high-contrast settings (i.e., images with a very bright background). In microscopy, a lens may use an apochromatic (APO) design instead of an achromat; an APO uses three lens elements to correct all wavelengths of the incoming light. When color matters most, keeping chromatic aberration under control gives the best results.
+
+# Reference
+
+* [SIX OPTICAL ABERRATIONS THAT COULD BE IMPACTING YOUR VISION SYSTEM, https://www.lumenera.com](https://www.lumenera.com/blog/six-optical-aberrations-that-could-be-impacting-your-vision-system)
+* [光学像差重要知识点详解|光学经典理论, 知乎 - 监控李誉](https://zhuanlan.zhihu.com/p/40149006)
\ No newline at end of file
diff --git a/content/physics/Physics_MOC.md b/content/physics/Physics_MOC.md
new file mode 100644
index 000000000..51b417525
--- /dev/null
+++ b/content/physics/Physics_MOC.md
@@ -0,0 +1,10 @@
+---
+title: Physics MOC
+tags:
+- physics
+- MOC
+---
+
+# Electromagnetism
+
+* [Electromagnetism MOC](physics/Electromagnetism/Electromagnetism_MOC.md)
\ No newline at end of file
diff --git a/content/physics/Wave/Doppler_Effect.md b/content/physics/Wave/Doppler_Effect.md
new file mode 100644
index 000000000..02a8a186c
--- /dev/null
+++ b/content/physics/Wave/Doppler_Effect.md
@@ -0,0 +1,47 @@
+---
+title: Doppler Effect
+tags:
+- physics
+- basic
+- wave
+---
+
+The **Doppler effect** is the phenomenon that when a wave source and an observer are in relative motion, the frequency the observer receives differs from the frequency the source emits.
+
+The whistle of a train rushing toward us from afar turns shrill (higher frequency, shorter wavelength), while the whistle of a train pulling away turns deep (lower frequency, longer wavelength); both are the Doppler effect, and the same happens with car horns and the ringing of train bells.
+
+# General
+
+In classical physics, where the speeds of the source and the receiver are much smaller than the speed of the wave in the medium, the observed frequency $f$ and the emitted frequency $f_0$ are related by:
+
+$$
+f = (\frac{c \pm v_r}{c \pm v_s})f_0
+$$
+* $c$ is the speed of the wave in the medium
+* $v_r$ is the receiver's speed relative to the medium: the numerator takes the plus sign if the receiver moves toward the source, the minus sign otherwise
+* $v_s$ is the source's speed relative to the medium: the denominator takes the plus sign if the source moves away from the receiver, the minus sign otherwise
+
+> [!note]
+> Note that this relation predicts the frequency decreases whenever either the source or the receiver moves away from the other.
+
+$$
+\frac{f}{v_{wr}} = \frac{f_0}{v_{ws}} = \frac{1}{\lambda}
+$$
+* $v_{wr}$ is the wave speed relative to the receiver
+* $v_{ws}$ is the wave speed relative to the source
+* $\lambda$ is the wavelength
+
+## Example
+
+![](physics/Wave/attachments/Dopplereffectsourcemovingrightatmach0.7.gif)
+
+Here $v_s = 0.7c$: the wavefronts start to bunch up on the right of (in front of) the source and spread farther apart on its left (behind it).
+
+A receiver in front hears a higher frequency, namely $f = \frac{c}{c-0.7c}f_0 = 3.33f_0$; a receiver behind hears a lower frequency, namely $f = \frac{c}{c + 0.7c}f_0 = 0.59f_0$.
+
+
+
+
+# Reference
+
+* [多普勒效应 - Wiki](https://zh.wikipedia.org/wiki/%E5%A4%9A%E6%99%AE%E5%8B%92%E6%95%88%E5%BA%94)
\ No newline at end of file
diff --git a/content/physics/Wave/attachments/Dopplereffectsourcemovingrightatmach0.7.gif b/content/physics/Wave/attachments/Dopplereffectsourcemovingrightatmach0.7.gif
new file mode 100644
index 000000000..d5bb9a9c5
Binary files /dev/null and b/content/physics/Wave/attachments/Dopplereffectsourcemovingrightatmach0.7.gif differ
diff --git a/content/physics/Wave/attachments/Pasted image 20230418153538.png b/content/physics/Wave/attachments/Pasted image 20230418153538.png
new file mode 100644
index 000000000..c9145108e
Binary files /dev/null and b/content/physics/Wave/attachments/Pasted image 20230418153538.png differ
diff --git a/content/report/2023.04.16 天线测试.md b/content/report/2023.04.16 天线测试.md
new file mode 100644
index 000000000..2c7d8fdaa
--- /dev/null
+++ b/content/report/2023.04.16 天线测试.md
@@ -0,0 +1,50 @@
+
+ A test of the antenna's ranging capability.
+
+# Background
+
+![](report/attachments/96251ac46494ab01294e570e352c426.png)
+
+# Test results
+
+## Ranging at effectively infinite distance
+
+With no reflector within 30 cm in front, which is beyond this radar's ranging limit, the setup approximates no reflection out to infinite distance; the voltage at the receiving end:
+
+![](report/attachments/7983094eb03d1dcc285edf9c1768018.png)
+
+Data collected with the previous antenna:
+
+![](report/attachments/f5d557933b15f8ea7f6861f70663d13.png)
+
+Two problems:
+
+* The current antenna is not stable enough
+* The core signal peak has dropped to about 1.7 V, where the previous core signal was 2.2 V
+
+## Real-time ranging experiment
+
+*In the real-time ranging experiment, the signal at the antenna is measured in real time while a metal baffle is placed in front on a schedule, to test the antenna's ranging ability.*
+
+The approximate placement schedule:
+1. 0-25 s: no baffle
+2. 25-50 s: baffle pressed against the antenna
+3. 50-75 s: no baffle
+4. 75-100 s: baffle placed at 10 cm
+5. 100-125 s: no baffle
+6. 125-150 s: baffle placed at 20 cm
+7. 150-175 s: no baffle
+8. 175-200 s: baffle placed at 30 cm
+
+Data from the new antenna:
+
+![](report/attachments/abaec3368e16f2c9be67b5edbba39be.png)
+
+Signal from the old antenna:
+
+![](report/attachments/ac4c5aa53392835d3db04a78e73476b.png)
+
+The problems:
+
+* The new antenna's signal is unstable, consistent with the infinite-distance test.
+* This destroys the separability of the signals from different distances.
diff --git a/content/report/attachments/2477544fc674d675ebb328cba3a74b1.png b/content/report/attachments/2477544fc674d675ebb328cba3a74b1.png
new file mode 100644
index 000000000..b60fd4445
Binary files /dev/null and b/content/report/attachments/2477544fc674d675ebb328cba3a74b1.png differ
diff --git a/content/report/attachments/7983094eb03d1dcc285edf9c1768018 1.png b/content/report/attachments/7983094eb03d1dcc285edf9c1768018 1.png
new file mode 100644
index 000000000..cc28ed8a7
Binary files /dev/null and b/content/report/attachments/7983094eb03d1dcc285edf9c1768018 1.png differ
diff --git a/content/report/attachments/7983094eb03d1dcc285edf9c1768018.png b/content/report/attachments/7983094eb03d1dcc285edf9c1768018.png
new file mode 100644
index 000000000..cc28ed8a7
Binary files /dev/null and b/content/report/attachments/7983094eb03d1dcc285edf9c1768018.png differ
diff --git a/content/report/attachments/96251ac46494ab01294e570e352c426.png b/content/report/attachments/96251ac46494ab01294e570e352c426.png
new file mode 100644
index 000000000..037b8009d
Binary files /dev/null and b/content/report/attachments/96251ac46494ab01294e570e352c426.png differ
diff --git a/content/report/attachments/abaec3368e16f2c9be67b5edbba39be.png b/content/report/attachments/abaec3368e16f2c9be67b5edbba39be.png
new file mode 100644
index 000000000..df31e2ec3
Binary files /dev/null and b/content/report/attachments/abaec3368e16f2c9be67b5edbba39be.png differ
diff --git a/content/report/attachments/ac4c5aa53392835d3db04a78e73476b.png b/content/report/attachments/ac4c5aa53392835d3db04a78e73476b.png
new file mode 100644
index 000000000..3545fe7ef
Binary files /dev/null and b/content/report/attachments/ac4c5aa53392835d3db04a78e73476b.png differ
diff --git a/content/report/attachments/f5d557933b15f8ea7f6861f70663d13.png b/content/report/attachments/f5d557933b15f8ea7f6861f70663d13.png
new file mode 100644
index 000000000..61fb0da4d
Binary files /dev/null and b/content/report/attachments/f5d557933b15f8ea7f6861f70663d13.png differ
diff --git a/content/signal_processing/envelope/hilbert_transform.md b/content/signal_processing/envelope/hilbert_transform.md
index df37cb825..41db5aa22 100644
--- a/content/signal_processing/envelope/hilbert_transform.md
+++ b/content/signal_processing/envelope/hilbert_transform.md
@@ -75,7 +75,7 @@ $$
![](signal_processing/envelope/attachments/Pasted%20image%2020240102150350.png)
-The Hilbert transform is given by the [Cauchy principal value](Math/real_analysis/cauchy_principal_value.md) of the convolution with the function $1/(\pi t)$.
+The Hilbert transform is given by the [Cauchy principal value](math/real_analysis/cauchy_principal_value.md) of the convolution with the function $1/(\pi t)$.
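+
+As an illustrative aside (a sketch added here, not part of the original note): in practice the Hilbert transform and the envelope it yields are usually computed via the FFT-based analytic signal, e.g. with `scipy.signal.hilbert`:
+
+```python
+import numpy as np
+from scipy.signal import hilbert
+
+# Amplitude-modulated test tone: a 50 Hz carrier with a 3 Hz envelope
+fs = 1000.0
+t = np.arange(0.0, 1.0, 1.0 / fs)
+x = (1 + 0.5 * np.cos(2 * np.pi * 3 * t)) * np.cos(2 * np.pi * 50 * t)
+
+analytic = hilbert(x)        # x + j*H(x), the analytic signal
+envelope = np.abs(analytic)  # recovers |1 + 0.5*cos(2*pi*3*t)|
+phase = np.unwrap(np.angle(analytic))
+inst_freq = np.diff(phase) * fs / (2 * np.pi)   # approx. 50 Hz carrier
+```
+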
## Geometrical meaning of HT
diff --git a/content/synthetic_aperture_radar_imaging/Antenna.md b/content/synthetic_aperture_radar_imaging/Antenna.md
index 82d1d665d..8adb39082 100644
--- a/content/synthetic_aperture_radar_imaging/Antenna.md
+++ b/content/synthetic_aperture_radar_imaging/Antenna.md
@@ -8,7 +8,7 @@ tags:
 
# Theorems you need to know
 
-* [🧷Resonant circuit](Physics/Electromagnetism/Resonant_circuit.md)
+* [🧷Resonant circuit](physics/Electromagnetism/Resonant_circuit.md)
 
# What is antenna
 
@@ -117,11 +117,11 @@ For radiation there are two aspects to consider: one is the electrons supplied by the side exciting the field
> A pulse is like throwing a single stone into the water; a sine wave is like throwing stones in on a schedule
 
> [!hint]
-> According to [Maxwell's equations](Physics/Electromagnetism/Maxwells_equation.md),
+> According to [Maxwell's equations](physics/Electromagnetism/Maxwells_equation.md),
>
> when an electromagnetic wave exists in a wire, it needs a time-varying current, that is, accelerating and decelerating charges, to support it. In a transmission line, there is no field without a source;
>
-> but when solving [Maxwell's equations](Physics/Electromagnetism/Maxwells_equation.md), there is a set of homogeneous solutions: fields that exist without any source, namely free-space waves;
+> but when solving [Maxwell's equations](physics/Electromagnetism/Maxwells_equation.md), there is a set of homogeneous solutions: fields that exist without any source, namely free-space waves;
>
> so an antenna is essentially an interface that turns the source-bound fields inside a wire into fields that need no source, that is, free-space waves
diff --git a/content/synthetic_aperture_radar_imaging/Chirp.md b/content/synthetic_aperture_radar_imaging/Chirp.md
index b1f410d83..2c335bf60 100644
--- a/content/synthetic_aperture_radar_imaging/Chirp.md
+++ b/content/synthetic_aperture_radar_imaging/Chirp.md
@@ -96,7 +96,7 @@ $$
 
## Hyperbolic
 
-Hyperbolic chirps are used in radar applications because they show the greatest [matched filter](https://en.wikipedia.org/wiki/Matched_filter) response after being distorted by the [Doppler effect](Physics/Wave/Doppler_Effect.md).
+Hyperbolic chirps are used in radar applications because they show the greatest [matched filter](https://en.wikipedia.org/wiki/Matched_filter) response after being distorted by the [Doppler effect](physics/Wave/Doppler_Effect.md).
 
signal frequency: