diff --git a/content/Physics/Electromagnetism/attachments/my-life 1.gif b/content/Physics/Electromagnetism/attachments/my-life 1.gif
new file mode 100644
index 000000000..2a0fdebff
Binary files /dev/null and b/content/Physics/Electromagnetism/attachments/my-life 1.gif differ
diff --git a/content/Physics/Electromagnetism/attachments/my-life.gif b/content/Physics/Electromagnetism/attachments/my-life.gif
new file mode 100644
index 000000000..115691a61
Binary files /dev/null and b/content/Physics/Electromagnetism/attachments/my-life.gif differ
diff --git a/content/Physics/Electromagnetism/attachments/output 2.gif b/content/Physics/Electromagnetism/attachments/output 2.gif
new file mode 100644
index 000000000..08119f823
Binary files /dev/null and b/content/Physics/Electromagnetism/attachments/output 2.gif differ
diff --git a/content/Physics/Optical/attachments/Fig_1_Circles_of_confusion.gif b/content/Physics/Optical/attachments/Fig_1_Circles_of_confusion.gif
new file mode 100644
index 000000000..442bc8862
Binary files /dev/null and b/content/Physics/Optical/attachments/Fig_1_Circles_of_confusion.gif differ
diff --git a/content/Physics/Optical/attachments/Fig_8_Chromatic_abberation_animation.gif b/content/Physics/Optical/attachments/Fig_8_Chromatic_abberation_animation.gif
new file mode 100644
index 000000000..eb75a0d01
Binary files /dev/null and b/content/Physics/Optical/attachments/Fig_8_Chromatic_abberation_animation.gif differ
diff --git a/content/Physics/Optical/attachments/Fig_9_Chromatic_aberration_lateral.gif b/content/Physics/Optical/attachments/Fig_9_Chromatic_aberration_lateral.gif
new file mode 100644
index 000000000..6f3c6bb93
Binary files /dev/null and b/content/Physics/Optical/attachments/Fig_9_Chromatic_aberration_lateral.gif differ
diff --git a/content/Physics/Optical/attachments/Pasted image 20230424110844.png b/content/Physics/Optical/attachments/Pasted image 20230424110844.png
new file mode 100644
index 000000000..aec0ceb78
Binary files /dev/null and b/content/Physics/Optical/attachments/Pasted image 20230424110844.png differ
diff --git a/content/Physics/Optical/attachments/Pasted image 20230424111226.png b/content/Physics/Optical/attachments/Pasted image 20230424111226.png
new file mode 100644
index 000000000..c830f65df
Binary files /dev/null and b/content/Physics/Optical/attachments/Pasted image 20230424111226.png differ
diff --git a/content/Physics/Optical/attachments/Pasted image 20230424112159.png b/content/Physics/Optical/attachments/Pasted image 20230424112159.png
new file mode 100644
index 000000000..b8dcf54f9
Binary files /dev/null and b/content/Physics/Optical/attachments/Pasted image 20230424112159.png differ
diff --git a/content/Physics/Optical/attachments/Pasted image 20230424113453.png b/content/Physics/Optical/attachments/Pasted image 20230424113453.png
new file mode 100644
index 000000000..28f1cebcc
Binary files /dev/null and b/content/Physics/Optical/attachments/Pasted image 20230424113453.png differ
diff --git a/content/Physics/Optical/attachments/Pasted image 20230424113838.png b/content/Physics/Optical/attachments/Pasted image 20230424113838.png
new file mode 100644
index 000000000..c5b528d3d
Binary files /dev/null and b/content/Physics/Optical/attachments/Pasted image 20230424113838.png differ
diff --git a/content/Physics/Optical/optical_abberation.md b/content/Physics/Optical/optical_abberation.md
new file mode 100644
index 000000000..e9a0255f4
--- /dev/null
+++ b/content/Physics/Optical/optical_abberation.md
@@ -0,0 +1,97 @@
+---
+title: Optical Aberration
+tags:
+- optical
+- photography
+- basic
+---
+
+# What is optical aberration
+
+Optical aberration refers to a defect in lens design that causes light to spread out instead of converging to form a sharp image. The effect ranges from every ray in the image being defocused to only certain points or edges being out of focus. Several types of optical aberration can occur when imaging. Building an ideal optical system corrected for every possible aberration would dramatically increase the cost of the lens. In practice some form of aberration is always present in a lens, but minimizing its impact is essential, so the manufacture of any lens usually involves some compromises.
+
+# Circle of confusion
+
+To explain how aberration blurs an image, we first have to answer: what is the circle of confusion? When a point of light from a target reaches the lens and converges exactly on the sensor, it appears sharp. Otherwise, if it converges in front of or behind the sensor, the light is spread more widely across the sensor. This can be seen in Figure 1, where a point source converges on the sensor, but as the sensor position changes, the amount of light spread along the sensor changes as well.
+
+
+
+The more the light spreads out, the less in focus the image becomes. Unless the aperture is very small, targets that are far apart from one another in the scene usually leave either the background or the foreground out of focus. This is because light converging from the foreground converges at a different point than light from more distant targets in the background.
+
+# Types of Optical Aberration
+
+## Coma
+
+
+Comatic aberration, also called coma, is named after the comet-like tail of its characteristic blur pattern.
+
+
+
+It is a defect inherent to some lenses or introduced by the optical design, and it distorts point sources that lie off the optical axis, such as stars. Specifically, coma is defined as a variation in magnification across the entrance pupil. In refractive or diffractive optical systems, and especially in images spanning a wide spectral range, coma is a function of wavelength.
+
+## Astigmatism
+
+Astigmatism can arise when rays propagating in two perpendicular planes come to focus at different points.
+
+This can be seen in Figure 3, where the two focal points are represented by the red horizontal plane and the blue vertical plane. The point of best sharpness in the image lies between these two points, where the circle of confusion of neither plane is too wide.
+
+
+
+When the optics are misaligned, astigmatism distorts the sides and edges of the image. It is usually described as a lack of sharpness when looking at lines in the image.
+
+This form of aberration can be corrected with the appropriate lens designs found in most high-quality optics. The first optics designed to correct astigmatism were made by Carl Zeiss, and the designs have been refined for more than a hundred years. At this point it usually appears only in very low-quality lenses, or in cases where the internal optical elements have been damaged or shifted because the lens was dropped.
+
+## (Petzval) Field Curvature
+
+Many lenses have a curved plane of focus. This produces soft corners while keeping mainly the center of the image in focus. Most lenses have at least some curvature of focus, so the entire image cannot be brought into focus without some cropping.
+
+Field curvature is the result of the image plane becoming non-flat because of multiple focal points.
+
+
+
+Camera lenses have largely corrected for this, but some field curvature can still be found in many lenses. Some sensor manufacturers are actually working on curved sensors that can compensate for a curved region of focus. Such a design would let the sensor correct the aberration without requiring expensive lens designs built to that precision, so cheaper lenses could be used to produce high-quality results. A real example of this can be seen in the Kepler space observatory, where a curved sensor array compensates for the telescope's large spherical optics.
+
+## Distortion
+
+Distortion is the aberration in which different parts of an object are imaged with different magnification when the object is imaged through a lens system. It degrades the geometric similarity between the object and its image, but it does not affect image sharpness. Depending on whether the periphery or the center of the object is magnified more, this aberration falls into two types:
+
+### Barrel distortion
+
+In an image with barrel distortion, the edges and sides curve away from the center. Visually this looks like a bulge in the image, because it captures the appearance of a curved field of view (FoV). For example, using a lens with a lower focal length (also known as a wide-angle lens) from high up in a tall building captures a wider FoV. As shown in Figure 5, the effect is most exaggerated with a fisheye lens, which produces a heavily distorted and very wide FoV. In that image, grid lines help illustrate how the distortion stretches the image outward more strongly toward the sides and edges.
+
+
+
+
+### Pincushion distortion
+
+When light is bent toward the optical axis by pincushion distortion, the image appears stretched inward. As a result, the edges and sides of the image appear to curve toward its center.
+
+This form of aberration is most common in telephoto lenses with longer focal lengths.
+
+
+
+### Mustache distortion
+
+**Mustache distortion**😂 is a combination of pincushion and barrel distortion. It causes the inner part of the image to curve outward while the outer part curves inward. Mustache distortion is a fairly rare aberration in which more than one distortion pattern affects the image. It is usually the sign of a very poorly designed lens, because it is the culmination of optical errors that blend the aberrations together.
+
+
+## Chromatic aberration
+
+### Longitudinal / axial aberration
+
+The color of light corresponds to a specific wavelength. Because of refraction, a color image has multiple wavelengths entering the lens and focusing at different points. Longitudinal or axial chromatic aberration is caused by different wavelengths focusing at different points along the optical axis. The shorter the wavelength, the closer its focal point is to the lens, and conversely the longer the wavelength, the farther its focal point is from the lens, as shown in Figure 8. By introducing a smaller aperture, the incoming light may still focus at different points, but the width (diameter) of the circle of confusion becomes much smaller, producing much less dramatic blur.
+
+
+
+### Transverse / lateral aberration
+
+Off-axis light that spreads different wavelengths along the image plane produces transverse or lateral chromatic aberration. This causes colored fringing along the edges of subjects in the image. It is harder to correct than longitudinal chromatic aberration.
+
+
+
+It can be corrected with an achromatic doublet, which combines elements of different refractive index. By bringing the two ends of the visible spectrum to a single focal point, the color fringing can be eliminated. For both lateral and longitudinal chromatic aberration, reducing the aperture size also helps, and it can be beneficial to avoid imaging targets in high-contrast settings (i.e. images with a very bright background). In microscopy, a lens may use an apochromatic (APO) design instead of an achromat; an apochromat uses three lens elements to correct all wavelengths of the incoming light. When color matters most, making sure chromatic aberration is mitigated will produce the best results.
+
+# Reference
+
+* [SIX OPTICAL ABERRATIONS THAT COULD BE IMPACTING YOUR VISION SYSTEM, https://www.lumenera.com](https://www.lumenera.com/blog/six-optical-aberrations-that-could-be-impacting-your-vision-system)
+* [光学像差重要知识点详解|光学经典理论, 知乎 - 监控李誉](https://zhuanlan.zhihu.com/p/40149006)
\ No newline at end of file
diff --git a/content/Physics/Physics_MOC.md b/content/Physics/Physics_MOC.md
new file mode 100644
index 000000000..a29a9e36a
--- /dev/null
+++ b/content/Physics/Physics_MOC.md
@@ -0,0 +1,10 @@
+---
+title: Physics MOC
+tags:
+- physics
+- MOC
+---
+
+# Electromagnetism
+
+* [Electromagnetism MOC](Physics/Electromagnetism/Electromagnetism_MOC.md)
\ No newline at end of file
diff --git a/content/Physics/Wave/Doppler_Effect.md b/content/Physics/Wave/Doppler_Effect.md
new file mode 100644
index 000000000..218cb9993
--- /dev/null
+++ b/content/Physics/Wave/Doppler_Effect.md
@@ -0,0 +1,47 @@
+---
+title: Doppler Effect
+tags:
+- physics
+- basic
+- wave
+---
+
+The **Doppler effect** is the phenomenon in which, when a wave source and an observer are in relative motion, the frequency received by the observer differs from the frequency emitted by the source.
+
+The whistle of a train rushing toward us from far away sounds shriller (higher frequency, shorter wavelength), while the whistle of a train moving away sounds deeper (lower frequency, longer wavelength); this is the Doppler effect. The same phenomenon occurs with car horns and train bells.
+
+# General
+
+In classical physics, where the speed of the source and the speed of the receiver are much smaller than the speed of the wave in the medium, the observed frequency $f$ and the emitted frequency $f_0$ are related by:
+
+$$
+f = (\frac{c \pm v_r}{c \pm v_s})f_0
+$$
+* $c$ is the speed of the wave in the medium
+* $v_r$ is the speed of the receiver relative to the medium; the numerator takes the plus sign if the receiver moves toward the source, and the minus sign otherwise
+* $v_s$ is the speed of the source relative to the medium; the denominator takes the plus sign if the source moves away from the receiver, and the minus sign otherwise
+
+> [!note]
+> Note that this relation predicts that the frequency will decrease whenever either the source or the receiver moves away from the other.
+
+$$
+\frac{f}{v_{wr}} = \frac{f_0}{v_{ws}} = \frac{1}{\lambda}
+$$
+* $v_{wr}$ is the wave speed relative to the receiver
+* $v_{ws}$ is the wave speed relative to the source
+* $\lambda$ is the wavelength
+
+## Example
+
+
+
+Here $v_s = 0.7c$: the wavefronts begin to bunch up on the right side of (in front of) the source and spread farther apart on its left side (behind it).
+
+A receiver in front hears a higher frequency, namely $f = \frac{c}{c-0.7c}f_0 = 3.33f_0$; a receiver behind hears a lower frequency, namely $f = \frac{c}{c + 0.7c}f_0 = 0.59f_0$.
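+
+A minimal sketch of this calculation in Python (sound in air with $c = 343\ \mathrm{m/s}$, a stationary receiver, and the function name `doppler_frequency` are all assumptions for illustration):
+
+```python
+def doppler_frequency(f0, c, v_r=0.0, v_s=0.0):
+    """Observed frequency for a moving source/receiver.
+
+    v_r > 0: receiver moving toward the source.
+    v_s > 0: source moving toward the receiver.
+    """
+    return f0 * (c + v_r) / (c - v_s)
+
+c = 343.0    # speed of sound in air, m/s
+f0 = 440.0   # emitted frequency, Hz
+
+print(doppler_frequency(f0, c, v_s=+0.7 * c))  # source approaching: ~3.33 * f0
+print(doppler_frequency(f0, c, v_s=-0.7 * c))  # source receding:    ~0.59 * f0
+```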
+
+
+
+
+# Reference
+
+* [多普勒效应 - Wiki](https://zh.wikipedia.org/wiki/%E5%A4%9A%E6%99%AE%E5%8B%92%E6%95%88%E5%BA%94)
\ No newline at end of file
diff --git a/content/Physics/Wave/attachments/Dopplereffectsourcemovingrightatmach0.7.gif b/content/Physics/Wave/attachments/Dopplereffectsourcemovingrightatmach0.7.gif
new file mode 100644
index 000000000..d5bb9a9c5
Binary files /dev/null and b/content/Physics/Wave/attachments/Dopplereffectsourcemovingrightatmach0.7.gif differ
diff --git a/content/Physics/Wave/attachments/Pasted image 20230418153538.png b/content/Physics/Wave/attachments/Pasted image 20230418153538.png
new file mode 100644
index 000000000..c9145108e
Binary files /dev/null and b/content/Physics/Wave/attachments/Pasted image 20230418153538.png differ
diff --git a/content/Report/2023.04.16 天线测试.md b/content/Report/2023.04.16 天线测试.md
new file mode 100644
index 000000000..3b7961157
--- /dev/null
+++ b/content/Report/2023.04.16 天线测试.md
@@ -0,0 +1,50 @@
+
+ Test of the antenna's ranging capability
+
+# Background
+
+
+
+# Test Results
+
+## Infinite-distance measurement
+
+With no reflector within 30 cm in front of the antenna, beyond this radar's ranging limit, the scene approximates no reflection at any distance. The voltage collected at the receiving end:
+
+
+
+Data collected with the previous antenna:
+
+
+
+There are two problems:
+
+* The current antenna is not stable enough
+* The peak of the core signal has dropped to about 1.7 V, whereas the core signal was previously about 2.2 V
+
+## Real-time ranging experiment
+
+*In the real-time ranging experiment, the signal is measured at the antenna in real time while a metal plate is placed in front of it on a schedule, to test the antenna's ranging capability.*
+
+The approximate placement schedule was:
+1. 0-25 s: no metal plate
+2. 25-50 s: metal plate flush against the antenna
+3. 50-75 s: no metal plate
+4. 75-100 s: metal plate at 10 cm
+5. 100-125 s: no metal plate
+6. 125-150 s: metal plate at 20 cm
+7. 150-175 s: metal plate at 30 cm
+8. 175-200 s: no metal plate
+
+Data collected with the new antenna:
+
+
+
+Signal collected with the old antenna:
+
+
+
+The problems are:
+
+* The new antenna's signal is unstable, which matches the result of the infinite-distance test.
+* As a result, the distinction between signals at different distances is lost.
diff --git a/content/Report/attachments/2477544fc674d675ebb328cba3a74b1.png b/content/Report/attachments/2477544fc674d675ebb328cba3a74b1.png
new file mode 100644
index 000000000..b60fd4445
Binary files /dev/null and b/content/Report/attachments/2477544fc674d675ebb328cba3a74b1.png differ
diff --git a/content/Report/attachments/7983094eb03d1dcc285edf9c1768018 1.png b/content/Report/attachments/7983094eb03d1dcc285edf9c1768018 1.png
new file mode 100644
index 000000000..cc28ed8a7
Binary files /dev/null and b/content/Report/attachments/7983094eb03d1dcc285edf9c1768018 1.png differ
diff --git a/content/Report/attachments/7983094eb03d1dcc285edf9c1768018.png b/content/Report/attachments/7983094eb03d1dcc285edf9c1768018.png
new file mode 100644
index 000000000..cc28ed8a7
Binary files /dev/null and b/content/Report/attachments/7983094eb03d1dcc285edf9c1768018.png differ
diff --git a/content/Report/attachments/96251ac46494ab01294e570e352c426.png b/content/Report/attachments/96251ac46494ab01294e570e352c426.png
new file mode 100644
index 000000000..037b8009d
Binary files /dev/null and b/content/Report/attachments/96251ac46494ab01294e570e352c426.png differ
diff --git a/content/Report/attachments/abaec3368e16f2c9be67b5edbba39be.png b/content/Report/attachments/abaec3368e16f2c9be67b5edbba39be.png
new file mode 100644
index 000000000..df31e2ec3
Binary files /dev/null and b/content/Report/attachments/abaec3368e16f2c9be67b5edbba39be.png differ
diff --git a/content/Report/attachments/ac4c5aa53392835d3db04a78e73476b.png b/content/Report/attachments/ac4c5aa53392835d3db04a78e73476b.png
new file mode 100644
index 000000000..3545fe7ef
Binary files /dev/null and b/content/Report/attachments/ac4c5aa53392835d3db04a78e73476b.png differ
diff --git a/content/Report/attachments/f5d557933b15f8ea7f6861f70663d13.png b/content/Report/attachments/f5d557933b15f8ea7f6861f70663d13.png
new file mode 100644
index 000000000..61fb0da4d
Binary files /dev/null and b/content/Report/attachments/f5d557933b15f8ea7f6861f70663d13.png differ
diff --git a/content/_index.md b/content/_index.md
new file mode 100644
index 000000000..e173a14ed
--- /dev/null
+++ b/content/_index.md
@@ -0,0 +1,25 @@
+---
+title: "Home"
+tags:
+- catalog
+- MOC
+---
+🕵️♂️ This is Jude Wang's vault: his notebook, his knowledge, his second brain.
+
+☁️ Here to find his notes:
+
+* [📒Notes](atlas.md)
+
+🏔 To know more about me:
+
+* [🍉Resume](resume.md)
+
+
+
+
+
+
+
diff --git a/content/assets/pdf/NUS_Transcript.pdf.md b/content/assets/pdf/NUS_Transcript.pdf.md
new file mode 100644
index 000000000..e69de29bb
diff --git a/content/atlas.md b/content/atlas.md
new file mode 100644
index 000000000..f3335d391
--- /dev/null
+++ b/content/atlas.md
@@ -0,0 +1,62 @@
+---
+title: Atlas - Map of Maps
+tags:
+- MOC
+---
+
+🚧 There are notebooks about his research career:
+
+* [Deep Learning & Machine Learning](computer_sci/deep_learning_and_machine_learning/Deep%20_Learning_MOC.md)
+
+* [[synthetic_aperture_radar_imaging/SAR_MOC| Synthetic Aperture Radar(SAR) Imaging]]
+
+
+💻 Also, his research needs some basic science to support it:
+
+* [Data Structure and Algorithm MOC](computer_sci/data_structure_and_algorithm/MOC.md)
+
+* [Hardware](computer_sci/Hardware/Hardware_MOC.md)
+
+* [Physics](Physics/Physics_MOC.md)
+
+* [Signal Processing](signal_processing/signal_processing_MOC.md)
+
+* [Data Science](data_sci/data_sci_MOC.md)
+
+* [About coding language design detail](computer_sci/coding_knowledge/coding_lang_MOC.md)
+
+* [Math](Math/MOC.md)
+
+* [Computational Geometry](computer_sci/computational_geometry/MOC.md)
+
+* [Code Framework Learn](computer_sci/code_frame_learn/MOC.md)
+
+🦺 I also need some tools to help me:
+
+* [Git](toolkit/git/git_MOC.md)
+
+💻 Code Practice:
+
+* [💽Programing Problem Solution Record](https://github.com/PinkR1ver/JudeW-Problemset)
+
+🛶 Also, he keeps some notes about his hobbies:
+
+* [📷 Photography](Photography/Photography_MOC.md)
+
+* [📮Literature](文学/文学_MOC.md)
+
+* [🥐Food](food/MOC.md)
+
+* [🎬Watching List](https://pinkr1ver.notion.site/5e136466f3664ff1aaaa75b85446e5b4?v=a41efbce52a84f7aa89d8f649f4620f6&pvs=4)
+
+⭐ Here you can find my recent studies:
+
+* [Recent notes (this function cannot be used on web)](recent.md)
+* [Papers Recently Read](research_career/papers_read.md)
+
+🎏 I also have some plans in mind to do:
+
+* [Life List🚀](plan/life.md)
+
+☁️ I also have some daily thoughts:
+* [Logs](log/log_MOC.md)
\ No newline at end of file
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/Deep _Learning_MOC.md b/content/computer_sci/Deep_Learning_And_Machine_Learning/Deep _Learning_MOC.md
new file mode 100644
index 000000000..ec42b171c
--- /dev/null
+++ b/content/computer_sci/Deep_Learning_And_Machine_Learning/Deep _Learning_MOC.md
@@ -0,0 +1,22 @@
+---
+title: Deep Learning - MOC
+tags:
+- MOC
+- deep-learning
+---
+
+# Tech Explanation
+
+* [⭐Deep Learning MOC](computer_sci/deep_learning_and_machine_learning/deep_learning/deep_learning_MOC.md)
+
+* [✨Machine Learning MOC](computer_sci/deep_learning_and_machine_learning/machine_learning/MOC.md)
+
+* [LLM - MOC](computer_sci/deep_learning_and_machine_learning/LLM/LLM_MOC.md)
+
+# Deep-learning Research
+
+* [Model Interpretability](computer_sci/deep_learning_and_machine_learning/Model_interpretability/Model_Interpretability_MOC.md)
+
+* [Famous Model - MOC](computer_sci/deep_learning_and_machine_learning/Famous_Model/Famous_Model_MOC.md)
+
+* [Model Evaluation - MOC](computer_sci/deep_learning_and_machine_learning/Evaluation/model_evaluation_MOC.md)
\ No newline at end of file
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/Evaluation/attachments/Pasted image 20230526161419.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/Evaluation/attachments/Pasted image 20230526161419.png
new file mode 100644
index 000000000..76c404bdf
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/Evaluation/attachments/Pasted image 20230526161419.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/Evaluation/attachments/Pasted image 20230526161422.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/Evaluation/attachments/Pasted image 20230526161422.png
new file mode 100644
index 000000000..76c404bdf
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/Evaluation/attachments/Pasted image 20230526161422.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/Evaluation/attachments/Pasted image 20230526162035.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/Evaluation/attachments/Pasted image 20230526162035.png
new file mode 100644
index 000000000..16bffe91e
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/Evaluation/attachments/Pasted image 20230526162035.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/Evaluation/attachments/Pasted image 20230526162839.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/Evaluation/attachments/Pasted image 20230526162839.png
new file mode 100644
index 000000000..c3280de52
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/Evaluation/attachments/Pasted image 20230526162839.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/Evaluation/attachments/Pasted image 20230526163614.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/Evaluation/attachments/Pasted image 20230526163614.png
new file mode 100644
index 000000000..948d2f3f8
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/Evaluation/attachments/Pasted image 20230526163614.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/Evaluation/attachments/Pasted image 20230526164105.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/Evaluation/attachments/Pasted image 20230526164105.png
new file mode 100644
index 000000000..75965a471
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/Evaluation/attachments/Pasted image 20230526164105.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/Evaluation/attachments/Pasted image 20230526164106.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/Evaluation/attachments/Pasted image 20230526164106.png
new file mode 100644
index 000000000..75965a471
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/Evaluation/attachments/Pasted image 20230526164106.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/Evaluation/attachments/Pasted image 20230529130501.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/Evaluation/attachments/Pasted image 20230529130501.png
new file mode 100644
index 000000000..a8cbbc247
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/Evaluation/attachments/Pasted image 20230529130501.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/Evaluation/attachments/Pasted image 20230529130509.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/Evaluation/attachments/Pasted image 20230529130509.png
new file mode 100644
index 000000000..c126dc38b
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/Evaluation/attachments/Pasted image 20230529130509.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/Evaluation/attachments/Pasted image 20230529130856.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/Evaluation/attachments/Pasted image 20230529130856.png
new file mode 100644
index 000000000..afb26188a
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/Evaluation/attachments/Pasted image 20230529130856.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/Evaluation/model_evaluation_MOC.md b/content/computer_sci/Deep_Learning_And_Machine_Learning/Evaluation/model_evaluation_MOC.md
new file mode 100644
index 000000000..97c7ae3e9
--- /dev/null
+++ b/content/computer_sci/Deep_Learning_And_Machine_Learning/Evaluation/model_evaluation_MOC.md
@@ -0,0 +1,8 @@
+---
+title: Model Evaluation - MOC
+tags:
+- deep-learning
+- evaluation
+---
+
+* [Model Evaluation in Time Series Forecasting](computer_sci/deep_learning_and_machine_learning/Evaluation/time_series_forecasting.md)
\ No newline at end of file
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/Evaluation/time_series_forecasting.md b/content/computer_sci/Deep_Learning_And_Machine_Learning/Evaluation/time_series_forecasting.md
new file mode 100644
index 000000000..2d3031ea7
--- /dev/null
+++ b/content/computer_sci/Deep_Learning_And_Machine_Learning/Evaluation/time_series_forecasting.md
@@ -0,0 +1,121 @@
+---
+title: Model Evaluation in Time Series Forecasting
+tags:
+- deep-learning
+- evaluation
+- time-series-dealing
+---
+
+
+
+# Some famous time-series scoring techniques
+
+1. **MAE, RMSE and AIC**
+2. **Mean Forecast Accuracy**
+3. **Warning: The time series model EVALUATION TRAP!**
+4. **RdR Score Benchmark**
+
+## MAE, RMSE, AIC
+
+MAE means **Mean Absolute Error (MAE)** and RMSE means **Root Mean Squared Error (RMSE)**.
+
+These are two well-known metrics for measuring the accuracy of continuous variables. MAE was used frequently in earlier articles, while observations from 2016 show that RMSE and other versions of R-squared are gradually being adopted as well.
+
+*We need to understand when each metric is the better choice.*
+
+### MAE
+
+$$
+\text{MAE} = \frac{1}{n}\sum_{j=1}^n |y_j - \hat{y}_j|
+$$
+A defining property of MAE is that all individual differences carry equal weight.
+
+If the absolute value is removed, MAE turns into the **Mean Bias Error (MBE)**; when using MBE, be aware that positive and negative biases cancel each other out.
+
+### RMSE
+
+$$
+\text{RMSE} = \sqrt{\frac{1}{n} \sum_{j=1}^n (y_j - \hat{y}_j)^2}
+$$
+
+Root Mean Squared Error (RMSE) is a quadratic scoring rule that also measures the average magnitude of the error. It is the square root of the average of the squared differences between predictions and actual observations.
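+
+A minimal sketch of both metrics (toy numbers; `numpy` only):
+
+```python
+import numpy as np
+
+def mae(y_true, y_pred):
+    return np.mean(np.abs(y_true - y_pred))
+
+def rmse(y_true, y_pred):
+    return np.sqrt(np.mean((y_true - y_pred) ** 2))
+
+y_true = np.array([25.0, 30.0, 28.0, 32.0])
+y_pred = np.array([24.0, 33.0, 27.0, 40.0])
+
+print(mae(y_true, y_pred))    # 3.25  - every error weighted equally
+print(rmse(y_true, y_pred))   # ~4.33 - the single large error (40 vs 32) dominates
+```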
+
+### AIC
+
+$$
+\text{AIC} = 2k - 2\ln{(\hat{L})}
+$$
+where $k$ is the number of estimated parameters in the model and $\hat{L}$ is the maximized value of the model's likelihood function.
+
+The **Akaike information criterion** (AIC) is a metric that helps compare models, because it accounts both for how well the model fits the data and for how complex the model is.
+
+AIC measures the loss of information and **penalizes model complexity**. It is the *negative log-likelihood penalized by the number of parameters*. The main idea behind AIC is that fewer model parameters is better. **AIC lets you test how well a model fits the dataset without overfitting it.**
+
+### Comparison
+
+#### Similarities between MAE and RMSE
+
+Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE) both express the average model prediction error in the units of the variable of interest. Both metrics range from 0 to ∞ and are indifferent to the direction of the error. They are negatively oriented scores, meaning lower values are better.
+
+#### Differences between MAE and RMSE
+
+*Because the errors are squared before being averaged, RMSE gives relatively high weight to large errors.* This means RMSE should be more useful when large errors are particularly undesirable; in MAE's average, those large errors are diluted.
+
+
+
+For AIC, lower is better, but there is no perfect score; it can only be used to compare the performance of different models on the same dataset.
+
+## Mean Forecast Accuracy
+
+
+
+Compute the Forecast Accuracy at each point, then average these values to obtain the Mean Forecast Accuracy.
+
+The major flaw of Mean Forecast Accuracy is that large deviations have a huge negative impact, e.g. $1 - \frac{|\hat{y}_j - y_j|}{y_j} = 1 - \frac{250-25}{25} = -800\%$.
+
+The fix is to clip the Forecast Accuracy at a minimum of 0%; the Median can also be used instead of the Mean.
+
+In general, **when your error distribution is skewed, you should use the Median rather than the Mean**. In some cases, the Mean Forecast Accuracy can also be meaningless. If you remember your statistics, the **coefficient of variation** (CV) is the ratio of the standard deviation to the mean ($\text{CV} = \text{Standard Deviation}/\text{Mean} \times 100$). A large CV means high variability, which also means greater dispersion around the mean. **For example, anything with a CV above 0.7 can be considered highly variable and not truly predictable, and it also indicates that your forecasting model's predictive power is very unstable!**
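+
+A minimal sketch of the clipped Forecast Accuracy and the Mean/Median aggregation discussed above (toy numbers only):
+
+```python
+import numpy as np
+
+def forecast_accuracy(y_true, y_pred):
+    """Per-point accuracy 1 - |error| / actual, clipped at a minimum of 0%."""
+    acc = 1.0 - np.abs(y_pred - y_true) / y_true
+    return np.clip(acc, 0.0, None)
+
+y_true = np.array([25.0, 30.0, 28.0, 26.0])
+y_pred = np.array([24.0, 29.0, 27.0, 250.0])   # one huge miss
+
+acc = forecast_accuracy(y_true, y_pred)
+print(acc.mean())      # mean is dragged down by the outlier (negative without clipping)
+print(np.median(acc))  # median stays robust to the skewed error
+```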
+
+## RdR Score Benchmark (an experimental metric; the blogger notes that it has not appeared in any research paper)
+
+RdR metric stands for:
+* *R*: **Naïve Random Walk**
+* *d*: **Dynamic Time Warping**
+* *R*: **Root Mean Squared Error**
+
+### DTW to deal with shape similarity
+
+
+
+Metrics such as RMSE and MAE fail to account for an important criterion: **shape similarity**.
+
+The RdR Score Benchmark uses [**Dynamic Time Warping (DTW)**](computer_sci/deep_learning_and_machine_learning/Trick/DTW.md) as its shape-similarity measure.
+
+
+Euclidean distance can be a poor choice between time series, because of warping along the time axis.
+
+* DTW finds the optimal (minimum-distance) warping path between two time series by "synchronizing"/"aligning" the signals along the time axis, as sketched below.
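+
+A minimal DTW-distance sketch (plain dynamic programming with `numpy`; real use would rely on an optimized library, which is not assumed here):
+
+```python
+import numpy as np
+
+def dtw_distance(a, b):
+    """DTW distance between two 1-D series via dynamic programming."""
+    n, m = len(a), len(b)
+    cost = np.full((n + 1, m + 1), np.inf)
+    cost[0, 0] = 0.0
+    for i in range(1, n + 1):
+        for j in range(1, m + 1):
+            d = abs(a[i - 1] - b[j - 1])
+            # best of match, insertion, deletion
+            cost[i, j] = d + min(cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1])
+    return cost[n, m]
+
+# Two series with the same shape, shifted in time
+a = np.array([0, 1, 2, 3, 2, 1, 0], dtype=float)
+b = np.array([0, 0, 1, 2, 3, 2, 1], dtype=float)
+print(dtw_distance(a, b))   # small distance despite the time shift
+```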
+
+### RdR score means
+
+
+
+
+
+The *RdR score* is computed from the RMSE and the DTW distance, and measures how much better your model is than a Naïve Random Walk (*the Random Walk's RdR score is 0*).
+
+### RdR calculation details
+
+The RdR score can be computed by plotting RMSE vs. DTW; the resulting plot looks like this:
+
+
+
+
+The RdR score is then computed from the enclosed area (the article does not describe the full calculation; it appears in the [github code](https://github.com/CoteDave/blog/tree/master/RdR%20score), though this is not certain).
+
+# Reference
+
+* M.Sc, Dave Cote. “RdR Score Metric for Evaluating Time Series Forecasting Models.” _Medium_, 8 Feb. 2022, https://medium.com/@dave.cote.msc/rdr-score-metric-for-evaluating-time-series-forecasting-models-1c23f92f80e7.
+* JJ. “MAE and RMSE — Which Metric Is Better?” _Human in a Machine World_, 23 Mar. 2016, https://medium.com/human-in-a-machine-world/mae-and-rmse-which-metric-is-better-e60ac3bde13d.
+* _Accelerating Dynamic Time Warping Subsequence Search with GPU_. https://www.slideshare.net/DavideNardone/accelerating-dynamic-time-warping-subsequence-search-with-gpu. Accessed 29 May 2023.
\ No newline at end of file
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/Famous_Model/DeepAR.md b/content/computer_sci/Deep_Learning_And_Machine_Learning/Famous_Model/DeepAR.md
new file mode 100644
index 000000000..a46307a5c
--- /dev/null
+++ b/content/computer_sci/Deep_Learning_And_Machine_Learning/Famous_Model/DeepAR.md
@@ -0,0 +1,77 @@
+---
+title: DeepAR - Time Series Forecasting
+tags:
+- deep-learning
+- model
+- time-series-dealing
+---
+
+DeepAR, an autoregressive recurrent network developed by Amazon, is the first model that could natively work on multiple time series. It is a milestone in the time-series community.
+
+# What is DeepAR
+
+> [!quote]
+> DeepAR is the first successful model to combine Deep Learning with traditional Probabilistic Forecasting.
+
+* **Multiple time-series support**
+* **Extra covariates**: *DeepAR* allows extra features, or covariates. This was very important for me when learning *DeepAR*, because in my task each time series has corresponding features.
+* **Probabilistic output**: Instead of making a single prediction, the model leverages [**quantile loss**](computer_sci/deep_learning_and_machine_learning/Trick/quantile_loss.md) to output prediction intervals.
+* **“Cold” forecasting:** By learning from thousands of time-series that potentially share a few similarities, _DeepAR_ can provide forecasts for time-series that have little or no history at all.
+
+# Block used in DeepAR
+
+* [LSTM](computer_sci/deep_learning_and_machine_learning/deep_learning/LSTM.md)
+
+# *DeepAR* Architecture
+
+The DeepAR model does not use LSTMs to compute the prediction directly. Instead, it estimates the parameters of a Gaussian likelihood function, $\theta=(\mu,\sigma)$, i.e. the mean and standard deviation of that Gaussian likelihood.
+
+## Training Step-by-Step
+
+
+
+Suppose we are at time step $t$ of time series $i$:
+
+1. The LSTM cell takes as input the covariates $x_{i,t}$ (the value of $x_i$ at time $t$), the previous value of the target variable, $z_{i,t-1}$, and the previous hidden state $h_{i,t-1}$
+2. The LSTM then outputs the current hidden state $h_{i,t}$, which feeds into the next step
+3. The parameters of the Gaussian likelihood function, $\mu$ and $\sigma$, are computed from $h_{i,t}$, although not directly; the details are given below
+
+> [!quote]
+> In other words, the model learns the best $\mu$ and $\sigma$ for constructing the Gaussian distribution so that the prediction gets as close as possible to $z_{i,t}$; and because *DeepAR* trains on and predicts a single data point at a time, the model is also called autoregressive.
+
+
+## Inference Step-by-Step
+
+
+
+
+
+When the model is used for prediction, the only change is that the predicted value $\hat{z}$ replaces the true value $z$; here $\hat{z}_{i,t}$ is sampled from the Gaussian distribution the model has learned. Yet the parameters $\mu$ and $\sigma$ of this Gaussian are not learned by the model directly, so how does *DeepAR* achieve this?
+
+# Gaussian Likelihood
+
+$$
+\ell_G(z|\mu,\sigma) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp{\left(-\frac{(z-\mu)^2}{2\sigma^2}\right)}
+$$
+
+The task of estimating a Gaussian distribution is usually turned into maximizing the Gaussian log-likelihood function, i.e. maximum likelihood estimation (MLE).
+**Gaussian log-likelihood function**:
+
+$$
+\mathcal{L} = \sum_{i=1}^{N}\sum_{t=t_o}^{T} \log{\ell(z_{i,t}|\theta(h_{i,t}))}
+$$
+
+
+# Parameter estimation in *DeepAR*
+
+
+In statistics, a Gaussian distribution is usually estimated with the closed-form MLE formulas, but *DeepAR* does not do this. Instead, it uses two dense layers to perform the estimation, as shown below:
+
+
+
+The reason for estimating the Gaussian distribution with dense layers is that backpropagation can then be used.
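+
+A minimal sketch of such a parameter head in PyTorch (the `GaussianHead` module is hypothetical; `hidden_size` is assumed to be the LSTM's hidden dimension, and the softplus keeps $\sigma$ positive):
+
+```python
+import torch
+import torch.nn as nn
+
+class GaussianHead(nn.Module):
+    """Maps the LSTM hidden state h_{i,t} to (mu, sigma)."""
+    def __init__(self, hidden_size: int):
+        super().__init__()
+        self.mu_layer = nn.Linear(hidden_size, 1)
+        self.sigma_layer = nn.Linear(hidden_size, 1)
+
+    def forward(self, h):
+        mu = self.mu_layer(h)
+        sigma = nn.functional.softplus(self.sigma_layer(h))  # sigma > 0
+        return mu, sigma
+
+def gaussian_nll(z, mu, sigma):
+    # Negative Gaussian log-likelihood, minimized with backpropagation
+    return -torch.distributions.Normal(mu, sigma).log_prob(z).mean()
+```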
+
+
+# Reference
+
+* [https://towardsdatascience.com/deepar-mastering-time-series-forecasting-with-deep-learning-bc717771ce85](https://towardsdatascience.com/deepar-mastering-time-series-forecasting-with-deep-learning-bc717771ce85)
\ No newline at end of file
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/Famous_Model/Famous_Model_MOC.md b/content/computer_sci/Deep_Learning_And_Machine_Learning/Famous_Model/Famous_Model_MOC.md
new file mode 100644
index 000000000..73ae5b214
--- /dev/null
+++ b/content/computer_sci/Deep_Learning_And_Machine_Learning/Famous_Model/Famous_Model_MOC.md
@@ -0,0 +1,11 @@
+---
+title: Famous Model MOC
+tags:
+- deep-learning
+- MOC
+---
+
+# Time-series
+
+* [DeepAR](computer_sci/deep_learning_and_machine_learning/Famous_Model/DeepAR.md)
+
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/Famous_Model/Temporal_Fusion_Transformer.md b/content/computer_sci/Deep_Learning_And_Machine_Learning/Famous_Model/Temporal_Fusion_Transformer.md
new file mode 100644
index 000000000..d4dc84e72
--- /dev/null
+++ b/content/computer_sci/Deep_Learning_And_Machine_Learning/Famous_Model/Temporal_Fusion_Transformer.md
@@ -0,0 +1,8 @@
+---
+title: Temporal Fusion Transformer
+tags:
+- deep-learning
+- model
+- time-series-dealing
+---
+
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/Famous_Model/attachments/Pasted image 20230523134253.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/Famous_Model/attachments/Pasted image 20230523134253.png
new file mode 100644
index 000000000..60ad80e43
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/Famous_Model/attachments/Pasted image 20230523134253.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/Famous_Model/attachments/Pasted image 20230523134255.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/Famous_Model/attachments/Pasted image 20230523134255.png
new file mode 100644
index 000000000..60ad80e43
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/Famous_Model/attachments/Pasted image 20230523134255.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/Famous_Model/attachments/Pasted image 20230523141219.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/Famous_Model/attachments/Pasted image 20230523141219.png
new file mode 100644
index 000000000..a0987e1df
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/Famous_Model/attachments/Pasted image 20230523141219.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/Famous_Model/attachments/Pasted image 20230523151201.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/Famous_Model/attachments/Pasted image 20230523151201.png
new file mode 100644
index 000000000..4911ae0e1
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/Famous_Model/attachments/Pasted image 20230523151201.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/LLM_MOC.md b/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/LLM_MOC.md
new file mode 100644
index 000000000..1a3a92952
--- /dev/null
+++ b/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/LLM_MOC.md
@@ -0,0 +1,25 @@
+---
+title: Large Language Model(LLM) - MOC
+tags:
+- deep-learning
+- LLM
+- NLP
+---
+
+# Training
+
+* [Training Tech Outline](computer_sci/deep_learning_and_machine_learning/LLM/train/steps.md)
+* [⭐⭐⭐Train LLM from scratch](computer_sci/deep_learning_and_machine_learning/LLM/train/train_LLM.md)
+* [⭐⭐⭐Detailed explanation of RLHF technology](computer_sci/deep_learning_and_machine_learning/LLM/train/RLHF.md)
+* [How to use fine-tuning to create your chatbot](computer_sci/deep_learning_and_machine_learning/LLM/train/finr_tune/how_to_fine_tune.md)
+* [Learn finetune by Stanford Alpaca](computer_sci/deep_learning_and_machine_learning/LLM/train/finr_tune/learn_finetune_byStanfordAlpaca.md)
+
+# Metrics
+
+How to evaluate an LLM's performance?
+
+* [Tasks to evaluate BERT - may also apply to other LMs](computer_sci/deep_learning_and_machine_learning/LLM/metircs/some_task.md)
+
+# Basic
+
+* [LLM Hyperparameter](computer_sci/deep_learning_and_machine_learning/LLM/basic/llm_hyperparameter.md)
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/basic/attachments/1687853622172.mp4 b/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/basic/attachments/1687853622172.mp4
new file mode 100644
index 000000000..248c3b417
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/basic/attachments/1687853622172.mp4 differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/basic/attachments/Pasted image 20230627160123.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/basic/attachments/Pasted image 20230627160123.png
new file mode 100644
index 000000000..72e7c63b5
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/basic/attachments/Pasted image 20230627160123.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/basic/attachments/Pasted image 20230627160125.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/basic/attachments/Pasted image 20230627160125.png
new file mode 100644
index 000000000..72e7c63b5
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/basic/attachments/Pasted image 20230627160125.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/basic/attachments/Pasted image 20230627162848.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/basic/attachments/Pasted image 20230627162848.png
new file mode 100644
index 000000000..b8612d971
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/basic/attachments/Pasted image 20230627162848.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/basic/attachments/Pasted image 20230627163514.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/basic/attachments/Pasted image 20230627163514.png
new file mode 100644
index 000000000..81f16a195
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/basic/attachments/Pasted image 20230627163514.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/basic/attachments/Pasted image 20230627165311.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/basic/attachments/Pasted image 20230627165311.png
new file mode 100644
index 000000000..5163f06f8
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/basic/attachments/Pasted image 20230627165311.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/basic/attachments/physic_temp.gif b/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/basic/attachments/physic_temp.gif
new file mode 100644
index 000000000..a2335b731
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/basic/attachments/physic_temp.gif differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/basic/attachments/rating_probabililty.gif b/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/basic/attachments/rating_probabililty.gif
new file mode 100644
index 000000000..780d3de8b
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/basic/attachments/rating_probabililty.gif differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/basic/llm_hyperparameter.md b/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/basic/llm_hyperparameter.md
new file mode 100644
index 000000000..a0f9deed9
--- /dev/null
+++ b/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/basic/llm_hyperparameter.md
@@ -0,0 +1,56 @@
+---
+title: LLM hyperparameter
+tags:
+- hyperparameter
+- LLM
+- deep-learning
+- basic
+---
+
+# LLM Temperature
+
+The definition of temperature comes from its physical meaning: the higher the temperature, the faster the atoms move, meaning more randomness.
+
+
+
+LLM temperature is a hyperparameter that regulates **the randomness, or creativity, of the output**.
+
+* The higher the LLM temperature, the more diverse and creative the output, with an increasing likelihood of straying from the context.
+* The lower the LLM temperature, the more focused and deterministic the output, sticking closely to the most likely prediction.
+
+
+
+## More detail
+
+The LLM model is to give a probability of next word, like this:
+
+
+
+“A cat is chasing a …”: there are lots of words that could fill that blank. Different words have different probabilities, and in the model we output ratings for the next word.
+
+Sure, we could always pick the highest-rated word, but that would produce very standard, predictable, boring sentences, and the model would not match human language, because we do not always use the most common word either.
+
+So we want to design a mechanism that **allows every word with a decent rating to occur with a reasonable probability**; that is why we need temperature in an LLM.
+
+As in the real physical world, we can sample to describe a distribution; *we use a softmax to describe the probability distribution over the next word*. The temperature is the $T$ in the formula:
+
+$$
+p_i = \frac{\exp{(\frac{R_i}{T})}}{\sum_i \exp{(\frac{R_i}{T})}}
+$$
+
+
+
+The lower the $T$, the closer the probability of the highest-rated word gets to 100%; the higher the $T$, the smoother the probabilities become across all words.
+
+*The gif below is important and intuitive.*
+
+
+
+So, with different settings of $T$, the next-word probabilities change, and we output the next word by sampling from those probabilities.
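+
+A minimal sketch of this temperature-scaled softmax (the ratings below are made up for illustration; `numpy` only):
+
+```python
+import numpy as np
+
+def softmax_with_temperature(ratings, T):
+    """Convert next-word ratings R_i into probabilities p_i at temperature T."""
+    scaled = np.asarray(ratings, dtype=float) / T
+    exp = np.exp(scaled - scaled.max())   # subtract max for numerical stability
+    return exp / exp.sum()
+
+ratings = [5.0, 3.0, 1.0]   # hypothetical ratings for three candidate words
+print(softmax_with_temperature(ratings, T=0.5))  # sharp: the top word dominates
+print(softmax_with_temperature(ratings, T=2.0))  # smooth: probabilities even out
+```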
+
+
+
+# Reference
+
+* [LLM Temperature, dedpchecks](https://deepchecks.com/glossary/llm-parameters/#:~:text=One%20intriguing%20parameter%20within%20LLMs,of%20straying%20from%20the%20context.)
+* [⭐⭐⭐https://www.youtube.com/watch?v=YjVuJjmgclU](https://www.youtube.com/watch?v=YjVuJjmgclU)
\ No newline at end of file
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/langchain/attachments/Pasted image 20230627154149.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/langchain/attachments/Pasted image 20230627154149.png
new file mode 100644
index 000000000..520150e74
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/langchain/attachments/Pasted image 20230627154149.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/langchain/langchain_basic.md b/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/langchain/langchain_basic.md
new file mode 100644
index 000000000..cee907c98
--- /dev/null
+++ b/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/langchain/langchain_basic.md
@@ -0,0 +1,44 @@
+---
+title: LangChain Explained
+tags:
+- LLM
+- basic
+- langchain
+---
+
+# What is LangChain
+
+LangChain is an open source framework that allows AI developers to combine LLMs like GPT-4 *with external sources of computation and data*.
+
+# Why LangChain
+
+LangChain can make an LLM answer questions based on your own documents. It can help you build lots of amazing apps.
+
+You can use LangChain to make GPT analyze your own company data, book flights based on your schedule, summarize batches of PDFs, and more.
+
+# LangChain value propositions
+
+## Components
+
+* LLM Wrappers
+* Prompt Templates
+* Indexes for relevant information retrieval
+
+## Chains
+
+Assemble components to solve a specific task - finding info in a book...
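+
+A minimal sketch of a chain (assuming the classic LangChain API with `PromptTemplate` and `LLMChain`; exact imports and class names vary between LangChain versions, and an OpenAI API key is assumed to be configured):
+
+```python
+from langchain.llms import OpenAI
+from langchain.prompts import PromptTemplate
+from langchain.chains import LLMChain
+
+# A prompt template turns user input into a full prompt for the LLM
+prompt = PromptTemplate(
+    input_variables=["book", "question"],
+    template="You are reading {book}. Answer the question: {question}",
+)
+
+llm = OpenAI(temperature=0.2)             # LLM wrapper component
+chain = LLMChain(llm=llm, prompt=prompt)  # chain = prompt template + LLM
+
+print(chain.run(book="Moby Dick", question="Who is the narrator?"))
+```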
+
+## Agents
+
+Agents allow LLMs to interact with their environment, for instance by making an API request to carry out a specific action.
+
+# LangChain Framework
+
+
+
+
+
+# Reference
+
+* [https://www.youtube.com/watch?v=aywZrzNaKjs](https://www.youtube.com/watch?v=aywZrzNaKjs)
\ No newline at end of file
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/metircs/attachments/Pasted image 20230629140914.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/metircs/attachments/Pasted image 20230629140914.png
new file mode 100644
index 000000000..769437eed
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/metircs/attachments/Pasted image 20230629140914.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/metircs/attachments/Pasted image 20230629140929.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/metircs/attachments/Pasted image 20230629140929.png
new file mode 100644
index 000000000..769437eed
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/metircs/attachments/Pasted image 20230629140929.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/metircs/some_task.md b/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/metircs/some_task.md
new file mode 100644
index 000000000..d10d8f3a6
--- /dev/null
+++ b/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/metircs/some_task.md
@@ -0,0 +1,36 @@
+---
+title: Tasks to evaluate BERT - may also apply to other LMs
+tags:
+- LLM
+- metircs
+- deep-learning
+- benchmark
+---
+
+# Overview
+
+
+
+# MNLI-m (Multi-Genre Natural Language Inference - Matched):
+
+MNLI-m is a benchmark dataset and task for natural language inference (NLI). The goal of NLI is to determine the logical relationship between two given sentences: whether the relationship is "entailment," "contradiction," or "neutral." MNLI-m focuses on matched data, which means the sentences are drawn from the same genres as the sentences in the training set. It is part of the GLUE (General Language Understanding Evaluation) benchmark, which evaluates the performance of models on various natural language understanding tasks.
+
+# QNLI (Question Natural Language Inference):
+
+QNLI is another NLI task included in the GLUE benchmark. In this task, the model is given a sentence that is a premise and a sentence that is a question related to the premise. The goal is to determine whether the answer to the question can be inferred from the given premise. The dataset for QNLI is derived from the Stanford Question Answering Dataset (SQuAD).
+
+# MRPC (Microsoft Research Paraphrase Corpus):
+
+MRPC is a dataset used for paraphrase identification or semantic equivalence detection. It consists of sentence pairs from various sources that are labeled as either paraphrases or not. The task is to classify whether a given sentence pair expresses the same meaning (paraphrase) or not. MRPC is also part of the GLUE benchmark and helps evaluate models' ability to understand sentence similarity and equivalence.
+
+# SST-2 (Stanford Sentiment Treebank - Binary Sentiment Classification):
+
+SST-2 is a binary sentiment classification task based on the Stanford Sentiment Treebank dataset. The dataset contains sentences from movie reviews labeled as either positive or negative sentiment. The task is to classify a given sentence as expressing a positive or negative sentiment. SST-2 is often used to evaluate the ability of models to understand and classify sentiment in natural language.
+
+# SQuAD (Stanford Question Answering Dataset):
+
+SQuAD is a widely known dataset and task for machine reading comprehension. It consists of questions posed by humans on a set of Wikipedia articles, where the answers to the questions are spans of text from the corresponding articles. The goal is to build models that can accurately answer the questions based on the provided context. SQuAD has been instrumental in advancing the field of question answering and evaluating models' reading comprehension capabilities.
+
+Overall, these tasks and datasets serve as benchmarks for evaluating natural language understanding and processing models. They cover a range of language understanding tasks, including natural language inference, paraphrase identification, sentiment analysis, and machine reading comprehension.
+
+
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/train/RLHF.md b/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/train/RLHF.md
new file mode 100644
index 000000000..8f72b3060
--- /dev/null
+++ b/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/train/RLHF.md
@@ -0,0 +1,65 @@
+---
+title: Reinforcement Learning from Human Feedback
+tags:
+- LLM
+- deep-learning
+- RLHF
+- LLM-training-method
+---
+
+
+# Review: Reinforcement Learning Basics
+
+
+
+
+Reinforcement learning is a mathematical framework.
+
+To demystify reinforcement learning: it is an open-ended framework that uses a reward function to optimize an agent to solve complex tasks in a target environment.
+
+
+
+# Step by Step
+
+For RLHF training method, here are three core steps:
+
+1. Pretraining a language model
+2. Gathering data (question-answer data) and training a reward model
+3. Fine-tuning the LM with reinforcement learning
+
+## Step 1. Pretraining Language Models
+
+Read this to learn how to train a LM:
+
+[Pretraining language models](computer_sci/deep_learning_and_machine_learning/LLM/train/train_LLM.md)
+
+OpenAI used a smaller version of GPT-3 for its first popular RLHF model - InstructGPT.
+
+RLHF is still a new area: there is no settled answer to which model is the best starting point for RLHF, and fine-tuning on expensive augmented data is not necessarily required.
+
+## Step 2. Reward model training
+
+In the reward model, we integrate human preferences into the system.
+
+
+
+
+
+# Reference
+
+* [Reinforcement Learning from Human Feedback: From Zero to chatGPT, YouTube, HuggingFace](https://www.youtube.com/watch?v=2MBJOuVq380)
+* [Hugging Face blog, ChatGPT 背后的“功臣”——RLHF 技术详解](https://huggingface.co/blog/zh/rlhf)
\ No newline at end of file
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/train/attachments/Pasted image 20230628145009.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/train/attachments/Pasted image 20230628145009.png
new file mode 100644
index 000000000..991eeb711
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/train/attachments/Pasted image 20230628145009.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/train/attachments/Pasted image 20230628160836.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/train/attachments/Pasted image 20230628160836.png
new file mode 100644
index 000000000..a8b01f8e8
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/train/attachments/Pasted image 20230628160836.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/train/attachments/Pasted image 20230628161627.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/train/attachments/Pasted image 20230628161627.png
new file mode 100644
index 000000000..67f495fa0
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/train/attachments/Pasted image 20230628161627.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/train/attachments/Pasted image 20230629104307.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/train/attachments/Pasted image 20230629104307.png
new file mode 100644
index 000000000..c421de4fb
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/train/attachments/Pasted image 20230629104307.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/train/attachments/Pasted image 20230629145231.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/train/attachments/Pasted image 20230629145231.png
new file mode 100644
index 000000000..3be6d002d
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/train/attachments/Pasted image 20230629145231.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/train/dataset/make_custom_dataset.md b/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/train/dataset/make_custom_dataset.md
new file mode 100644
index 000000000..483defd34
--- /dev/null
+++ b/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/train/dataset/make_custom_dataset.md
@@ -0,0 +1,8 @@
+---
+title: How to make custom dataset?
+tags:
+- dataset
+- LLM
+- deep-learning
+---
+
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/train/finr_tune/attachments/Pasted image 20230627145954.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/train/finr_tune/attachments/Pasted image 20230627145954.png
new file mode 100644
index 000000000..46490c09f
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/train/finr_tune/attachments/Pasted image 20230627145954.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/train/finr_tune/how_to_fine_tune.md b/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/train/finr_tune/how_to_fine_tune.md
new file mode 100644
index 000000000..b5ed6332e
--- /dev/null
+++ b/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/train/finr_tune/how_to_fine_tune.md
@@ -0,0 +1,7 @@
+---
+title: How to use fine-tuning to create your chatbot
+tags:
+- deep-learning
+- LLM
+---
+
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/train/finr_tune/learn_finetune_byStanfordAlpaca.md b/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/train/finr_tune/learn_finetune_byStanfordAlpaca.md
new file mode 100644
index 000000000..ee88ff0e9
--- /dev/null
+++ b/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/train/finr_tune/learn_finetune_byStanfordAlpaca.md
@@ -0,0 +1,19 @@
+---
+title: Learn finetune by Stanford Alpaca
+tags:
+- deep-learning
+- LLM
+- fine-tune
+- LLaMA
+---
+
+
+
+
+
+
+
+# Reference
+
+* [https://www.youtube.com/watch?v=pcszoCYw3vc](https://www.youtube.com/watch?v=pcszoCYw3vc)
+* [https://crfm.stanford.edu/2023/03/13/alpaca.html](https://crfm.stanford.edu/2023/03/13/alpaca.html)
\ No newline at end of file
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/train/steps.md b/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/train/steps.md
new file mode 100644
index 000000000..d31c00085
--- /dev/null
+++ b/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/train/steps.md
@@ -0,0 +1,24 @@
+---
+title: LLM training steps
+tags:
+- LLM
+- deep-learning
+---
+
+Training a large language model (LLM) typically involves the following steps:
+
+1. **Data collection**: Gather large-scale text data as training data. This can be text from the internet, books, articles, news, conversation logs, and so on. The quality and diversity of the data are crucial for training a high-quality LLM.
+
+2. **Preprocessing**: Preprocess the data to make it suitable for model training. This includes tokenization (splitting text into words or subword units), building a vocabulary (mapping tokens to numeric representations), and cleaning and normalizing the text.
+
+3. **Building the model architecture**: Choose an appropriate model architecture for the LLM. The most common architecture today is the Transformer, which consists of multiple layers of self-attention and feed-forward networks.
+
+4. **Pretraining**: Pretrain the model on a large-scale text dataset. Pretraining means extracting linguistic knowledge in an unsupervised way, by having the model learn tasks such as predicting missing words or the next word. This allows the model to learn rich language representations.
+
+5. **Fine-tuning**: After pretraining, fine-tune the model on task-specific data. Fine-tuning means supervised training on labeled data for a specific task, such as text generation or question answering. Through fine-tuning, the model adapts better to the requirements of that task.
+
+6. **Hyperparameter tuning**: Adjust the model's hyperparameters, such as the learning rate, batch size, and number of layers, to obtain better performance.
+
+7. **Evaluation and iteration**: Evaluate the trained model and iterate on it based on the results. This may include adjusting the architecture, adding training data, or changing the training strategy.
+
+These steps are usually iterative; through continued training and refinement, the LLM achieves better performance and generation ability across a range of natural language processing tasks. Note that training an LLM requires a large amount of compute and time, and is usually carried out by a professional team in a large-scale computing environment.
\ No newline at end of file
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/train/train_LLM.md b/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/train/train_LLM.md
new file mode 100644
index 000000000..5cea2bae0
--- /dev/null
+++ b/content/computer_sci/Deep_Learning_And_Machine_Learning/LLM/train/train_LLM.md
@@ -0,0 +1,143 @@
+---
+title: Train LLM from scratch
+tags:
+- LLM
+- LLM-training-method
+- deep-learning
+---
+
+# Find a dataset
+
+Find a corpus of text in the language you prefer.
+* Such as [OSCAR](https://oscar-project.org/)
+
+Intuitively, the more data you can get to pretrain on, the better results you will get.
+
+# Train a tokenizer
+
+There are some things you need to take into consideration when training a tokenizer.
+
+## Tokenization
+
+You can read more detailed post - [Tokenization](computer_sci/deep_learning_and_machine_learning/NLP/basic/tokenization.md)
+
+Tokenization is the process of **breaking text into words or sentences**. These tokens help the machine learn the context of the text, which helps in *interpreting the meaning behind the text*. Hence, tokenization is *the first and foremost step when working with text*. Once tokenization has been performed on the corpus, the resulting tokens can be used to build the vocabulary used in the further steps of training the model.
+
+Example:
+
+“The city is on the river bank” -> “The”, ”city”, ”is”, ”on”, ”the”, ”river”, ”bank”
+
+Here are some typical tokenization schemes:
+* Word ( White Space ) Tokenization
+* Character Tokenization
+* **Subword Tokenization (SOTA)**
+
+
+Subword tokenization can handle the OOV (out-of-vocabulary) problem effectively. A minimal training sketch follows the algorithm list below.
+
+### Subword Tokenization Algorithm
+
+* **Byte pair encoding** *(BPE)*
+* **Byte-level byte pair encoding**
+* **WordPiece**
+* **unigram**
+* **SentencePiece**
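+
+A minimal sketch of training a byte-level BPE tokenizer (assuming the HuggingFace `tokenizers` package; the corpus file `oscar_sample.txt` and the output directory are illustrative assumptions):
+
+```python
+import os
+from tokenizers import ByteLevelBPETokenizer
+
+# Train a byte-level BPE tokenizer on a plain-text corpus
+tokenizer = ByteLevelBPETokenizer()
+tokenizer.train(
+    files=["oscar_sample.txt"],          # hypothetical corpus file
+    vocab_size=52_000,
+    min_frequency=2,
+    special_tokens=["<s>", "<pad>", "</s>", "<unk>", "<mask>"],
+)
+
+os.makedirs("tokenizer_out", exist_ok=True)
+tokenizer.save_model("tokenizer_out")    # writes vocab.json and merges.txt
+print(tokenizer.encode("The city is on the river bank").tokens)
+```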
+
+## Word embedding
+
+After tokenization, our text becomes tokens. We also want to represent tokens mathematically, so we use word embedding techniques to convert words into vectors. A small Word2Vec sketch follows the list of algorithms below.
+
+Here are some typical word embedding algorithms:
+
+* **Word2Vec**
+ * skip-gram
+ * continuous bag-of-words (CBOW)
+* **GloVe** (Global Vectors for Word Representations)
+* **FastText**
+* **ELMo** (Embeddings from Language Models)
+* **BERT** (Bidirectional Encoder Representations from Transformers)
+ * a language model rather than a traditional word embedding algorithm. **While BERT does generate word embeddings as a byproduct of its training process**, its primary purpose is to learn contextualized representations of words and text segments.
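+
+A minimal sketch of training skip-gram Word2Vec embeddings (assuming the `gensim` package and a toy tokenized corpus, both illustrative assumptions):
+
+```python
+from gensim.models import Word2Vec
+
+# Toy corpus: a list of tokenized sentences
+sentences = [
+    ["the", "city", "is", "on", "the", "river", "bank"],
+    ["the", "bank", "approved", "the", "loan"],
+]
+
+model = Word2Vec(
+    sentences,
+    vector_size=100,   # embedding dimension
+    window=5,          # context window size
+    min_count=1,
+    sg=1,              # 1 = skip-gram, 0 = CBOW
+)
+
+print(model.wv["bank"].shape)                  # (100,)
+print(model.wv.most_similar("city", topn=3))   # nearest neighbours in the toy space
+```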
+
+# Train a language model from scratch
+
+We need to clarify the definition of a language model.
+
+## Language model definition
+
+Simply put, a language model is a computational model or algorithm designed to understand and generate human language. It is a type of artificial intelligence (AI) model that uses *statistical and probabilistic techniques to predict and generate sequences of words and sentences*.
+
+It captures the statistical relationships between words or characters and *builds a probability distribution of the likelihood of a particular word or sequence of words appearing in a given context.*
+
+Language models can be used for various NLP tasks, including machine translation, speech recognition, text generation, and more.
+
+In use, a language model takes a seed input or prompt and uses its *learned knowledge of language (the model weights)* to predict the most likely words or characters to follow.
+
+The SOTA of language model today is GPT-4.
+
+## Language model algorithm
+
+
+### Classical LM
+
+* **n-gram**
+ * N-gram can be used as *both a tokenization algorithm and a component of a language model*. In my searching experience, n-grams are easier to understand as a language model to predict a likelihood distribution.
+* **HMMs** (Hidden Markov Models)
+* **RNNs** (Recurrent Neural Networks)
+
+### Cutting-edge
+
+* **GPT** (Generative Pre-trained Transformer)
+* **BERT** (Bidirectional Encoder Representations from Transformers)
+* **T5** (Text-To-Text Transfer Transformer)
+* **Megatron-LM**
+
+## Train Method
+
+Differently designed models usually have different training methods. Here we take a BERT-like model as an example.
+
+### BERT-Like model
+
+
+
+To train a BERT-like model, we train it on the task of **Masked Language Modeling** (MLM), i.e. predicting how to fill in arbitrary tokens that we randomly mask in the dataset.
+
+We also train the BERT-like model with **Next Sentence Prediction** (NSP). *MLM teaches BERT to understand relationships between words, and NSP teaches BERT to understand long-term dependencies across sentences.* In NSP training, BERT is given two sentences, A and B, and must determine whether B is the sentence that follows A, outputting `IsNextSentence` or `NotNextSentence`.
+
+With NSP training, BERT will have better performance.
+
+| Task | MNLI-m (acc) | QNLI (acc) | MRPC (acc) | SST-2 (acc) | SQuAD (f1) |
+| --- | --- | --- | --- | --- | --- |
+| With NSP | 84.4 | 88.4 | 86.7 | 92.7 | 88.5 |
+| Without NSP | 83.9 | 84.9 | 86.5 | 92.6 | 87.9 |
+
+[Table source](https://arxiv.org/pdf/1810.04805.pdf)
+[Table metrics explain](computer_sci/deep_learning_and_machine_learning/LLM/metircs/some_task.md)
+
+
+# Check LM actually trained
+
+## Take BERT as example
+
+Aside from looking at the training and eval losses going down, we can check our model using `FillMaskPipeline`.
+
+This is a method that takes as input *a masked token (here, ``) and returns a list of the most probable filled sequences, with their probabilities.*
+
+With this method, we can see whether our LM captures more semantic knowledge or even some sort of (statistical) common-sense reasoning.
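+
+A minimal sketch of this check with the HuggingFace `transformers` pipeline (the model directory `./my-new-lm` and the `<mask>` token are illustrative assumptions; the actual mask token depends on your tokenizer):
+
+```python
+from transformers import pipeline
+
+# Load the freshly pretrained model and its tokenizer
+fill_mask = pipeline("fill-mask", model="./my-new-lm", tokenizer="./my-new-lm")
+
+# The pipeline returns the most probable fill-ins with their probabilities
+for candidate in fill_mask("The city is on the river <mask>."):
+    print(candidate["token_str"], candidate["score"])
+```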
+
+# Fine-tune our LM on a downstream task
+
+Finally, we can fine-tune our LM on a downstream task such as translation, chatbot, text generation and so on.
+
+Different downstream task may need different methods to do fine-tune.
+
+# Example
+
+[https://colab.research.google.com/github/huggingface/blog/blob/main/notebooks/01_how_to_train.ipynb#scrollTo=G-kkz81OY6xH](https://colab.research.google.com/github/huggingface/blog/blob/main/notebooks/01_how_to_train.ipynb#scrollTo=G-kkz81OY6xH)
+
+
+# Reference
+
+* [HuggingFace blog, How to train a new language model from scratch using Transformers and Tokenizers](https://huggingface.co/blog/how-to-train)
+* [Medium blog, NLP Tokenization](https://medium.com/nerd-for-tech/nlp-tokenization-2fdec7536d17)
+* [Radford, A., Narasimhan, K., Salimans, T. & Sutskever, I. (2018). Improving language understanding by generative pre-training. , .](https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf)
+
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/Model_interpretability/Model_Interpretability_MOC.md b/content/computer_sci/Deep_Learning_And_Machine_Learning/Model_interpretability/Model_Interpretability_MOC.md
new file mode 100644
index 000000000..b1b56f005
--- /dev/null
+++ b/content/computer_sci/Deep_Learning_And_Machine_Learning/Model_interpretability/Model_Interpretability_MOC.md
@@ -0,0 +1,9 @@
+---
+title: Model Interpretability - MOC
+tags:
+- MOC
+- deep-learning
+- interpretability
+---
+
+* [SHAP](computer_sci/deep_learning_and_machine_learning/Model_interpretability/SHAP.md)
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/Model_interpretability/SHAP.md b/content/computer_sci/Deep_Learning_And_Machine_Learning/Model_interpretability/SHAP.md
new file mode 100644
index 000000000..ad4a91d78
--- /dev/null
+++ b/content/computer_sci/Deep_Learning_And_Machine_Learning/Model_interpretability/SHAP.md
@@ -0,0 +1,193 @@
+---
+title: SHAP - a reliable way to analyze model interpretability
+tags:
+- deep-learning
+- interpretability
+- algorithm
+---
+
+SHAP is the most popular model-agnostic technique used to explain predictions. SHAP stands for **SH**apley **A**dditive ex**P**lanations.
+
+Shapley values are obtained by incorporating concepts from *cooperative game theory* and *local explanations*.
+
+# Mathematical and Algorithm Foundation
+
+## Shapley Values
+
+Shapley values come from game theory and were invented by Lloyd Shapley as a way of providing a fair answer to the following question:
+
+> [!question]
+> If we have a coalition **C** that collaborates to produce a value **V**, how much did each individual member contribute to the final value?
+
+The way we assess each individual member's contribution is to remove each member in turn to get a new coalition and then compare the values produced, as in these graphs:
+
+
+
+We then enumerate, for member 1, every coalition with and without that member, like this:
+
+
+
+Taking (left value − right value) gives the marginal contribution for each pair, as in the image above; we then take a weighted mean of these differences:
+
+$$
+\varphi_i=\frac{1}{|\text{Members}|}\sum_{\forall\, C \text{ s.t. } i \,\notin\, C} \frac{\text{Marginal contribution of } i \text{ to } C}{\text{Number of coalitions of size } |C|}
+$$
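+
+To make the formula concrete, here is a minimal brute-force sketch that computes Shapley values for a small three-member coalition game; the `value` worth function is invented purely for illustration.
+
+```python
+from itertools import combinations
+from math import comb
+
+members = ["A", "B", "C"]
+
+def value(coalition):
+    """Toy worth function: any group of two or more produces 100,
+    "A" alone produces 20, anyone else alone produces 0."""
+    coalition = set(coalition)
+    if len(coalition) >= 2:
+        return 100
+    return 20 if coalition == {"A"} else 0
+
+def shapley(i):
+    others = [m for m in members if m != i]
+    n = len(members)
+    total = 0.0
+    for size in range(len(others) + 1):
+        for C in combinations(others, size):
+            marginal = value(C + (i,)) - value(C)
+            # Divide by the number of members, then by the number of coalitions of this size
+            total += marginal / (n * comb(n - 1, size))
+    return total
+
+for m in members:
+    print(m, round(shapley(m), 2))  # A: 40.0, B: 30.0, C: 30.0 (sums to value(ABC) = 100)
+```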
+
+## Shapley Additive Explanations
+
+We need to know what **additive** means here. Lundberg and Lee define an additive feature attribution as follows:
+
+
+
+
+
+$x'$, the simplified local input, usually means that we turn a feature vector into a discrete binary vector in which each feature is either included or excluded. The explanation model $g(x')$ takes this form:
+
+$$
+g(x')=\varphi_0+\sum_{i=1}^N \varphi_i {x'}_i
+$$
+
+* $\varphi_0$ is the **null output** of the model, that is, its **average output**
+* $\varphi_i$ is the **feature effect**, i.e. how much that feature changes the output of the model, as introduced above. It is also called the **attribution**
+
+
+
+Now Lundberg and Lee go on to describe a set of three desirable properties of such an additive feature attribution method: **local accuracy**, **missingness**, and **consistency**.
+
+### Local accuracy
+
+$$
+g(x')\approx f(x) \quad \text{if} \quad x'\approx x
+$$
+
+### Missingness
+
+$$
+{x_i}' = 0 \rightarrow \varphi_i = 0
+$$
+
+If a feature is excluded from the model, its attribution must be zero; that is, the only thing that can affect the output of the explanation model is the inclusion of features, not their exclusion.
+
+### Consistency
+
+If the model changes so that a feature's marginal contribution increases or stays the same, that feature's attribution cannot decrease; the attribution never moves in the opposite direction of the contribution.
+
+# Why SHAP
+
+Lundberg and Lee argue in their paper that an additive explanation model satisfies all three properties only if **the feature attributions are specifically chosen to be the Shapley values of those features**.
+
+# SHAP, step-by-step process (as in `shap.explainer`)
+
+As an example, consider an ice cream shop in an airport; there are four features we can use to predict its business.
+
+$$
+\begin{bmatrix}
+\text{temperature} & \text{day of weeks} & \text{num of flights} & \text{num of hours}
+\end{bmatrix}
+\\
+\rightarrow \\
+\begin{bmatrix}
+T & D & F & H
+\end{bmatrix}
+$$
+
+For example, say we want the Shapley value of temperature = 80 in the sample [80 1 100 4]. Here are the steps:
+
+- Step 1. Take a random permutation of the features, and bracket the feature we care about together with everything to its right (done manually here):
+
+$$
+\begin{bmatrix}
+F & D & \underbrace{T \quad H}
+\end{bmatrix}
+$$
+
+- Step 2. Pick a random sample from the dataset
+
+For example, [200 5 70 8], in the permuted order [F D T H]
+
+- Step 3. Form two vectors, $x_1$ and $x_2$
+
+$$
+x_1=[100 \quad 1 \quad 80 \quad \color{#BF40BF} 8 \color{#FFFFFF}]
+$$
+
+$x_1$ is partially from the original sample and partially from the randomly chosen one: the bracketed features take the random sample's values, except the feature we care about, which keeps its original value
+
+$$
+x_2 = [100 \quad 1 \quad \color{#BF40BF} 70 \quad 8 \color{#FFFFFF}]
+$$
+
+$x_2$ additionally replaces the feature we care about with the random sample's value
+
+Then, calculate the diff between the two predictions and record it ($c_1$ and $c_2$ denote the model's outputs on $x_1$ and $x_2$):
+
+$$
+DIFF = c_1 - c_2
+$$
+
+- Step 4. Record the diff, return to Step 1, and repeat many times. Finally:
+
+$$
+\text{SHAP}(T=80 | [80 \quad 1 \quad 100 \quad 4]) = \text{average(DIFF)}
+$$
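+
+Here is a minimal sketch of this sampling loop for a generic `model_predict` function; the variable names, number of iterations, and interface are assumptions for illustration, not the `shap` library's actual implementation.
+
+```python
+import numpy as np
+
+def sample_shap(model_predict, X, x, feature, n_iter=1000, seed=None):
+    """Monte Carlo estimate of the Shapley value of `feature` for sample `x`.
+
+    model_predict: callable mapping a (1, n_features) array to a prediction
+    X: background dataset of shape (n_samples, n_features)
+    x: the sample to explain, shape (n_features,)
+    """
+    rng = np.random.default_rng(seed)
+    n_features = X.shape[1]
+    diffs = []
+    for _ in range(n_iter):
+        perm = rng.permutation(n_features)       # Step 1: random feature order
+        z = X[rng.integers(len(X))]              # Step 2: random background sample
+        pos = np.where(perm == feature)[0][0]
+        after = perm[pos + 1:]                   # features to the right of `feature`
+        x1, x2 = x.copy(), x.copy()
+        x1[after] = z[after]                     # Step 3: x1 keeps the feature of interest
+        x2[after] = z[after]
+        x2[feature] = z[feature]                 # x2 replaces it as well
+        diffs.append(model_predict(x1[None]) - model_predict(x2[None]))
+    return float(np.mean(diffs))                 # Step 4: average the recorded diffs
+```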
+
+# Shapley kernel
+
+## Too many coalitions need to be sampled
+
+As introduced for Shapley values above, for each $\varphi_i$ we need to sample a lot of coalitions and compute the differences.
+
+For 4 features, we need 64 coalition samples in total; for 32 features, 17.1 billion.
+
+It’s entirely untenable.
+
+To get around this difficulty, we need to devise a **Shapley kernel**, and that is what Lundberg and Lee do.
+
+
+
+## Detail
+
+
+Though most ML models won't simply let you omit a feature, what we do is define a **background dataset** B that contains a set of representative data points the model was trained over. We then fill in the omitted feature or features with values from the background dataset, while holding the features included in the permutation fixed to their original values. We take the average of the model output over all of these new synthetic data points as our model output for that feature permutation, which we call $\bar{y}$.
+
+$$
+\bar{y}_{124} = \operatorname{E}_{\,i \in B}\left[\, y_{12i4} \,\right]
+$$
+
+
+Then we have a number of samples computed in this way, as in the image on the left.
+
+We can formulate this as a weighted linear regression, with each feature assigned a coefficient.
+
+It can be proven that, with a specific choice of sample weights, the regression coefficients are exactly the Shapley values. **This weighting scheme is the basis of the Shapley kernel.** In this situation, the weighted linear regression process as a whole is Kernel SHAP.
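+
+A minimal usage sketch of Kernel SHAP via the `shap` package, assuming a fitted scikit-learn style model; the dataset, model, and background size are arbitrary example choices.
+
+```python
+import shap
+from sklearn.datasets import load_diabetes
+from sklearn.ensemble import RandomForestRegressor
+
+X, y = load_diabetes(return_X_y=True)
+model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
+
+# A small background dataset stands in for "omitted" feature values
+background = shap.sample(X, 50)
+explainer = shap.KernelExplainer(model.predict, background)
+
+# Estimate Shapley values for a few samples (Kernel SHAP is slow, so keep it small)
+shap_values = explainer.shap_values(X[:5])
+print(shap_values.shape)  # (5, n_features)
+```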
+
+### Different types of SHAP
+
+- **Kernel SHAP**
+- Low-order SHAP
+- Linear SHAP
+- Max SHAP
+- Deep SHAP
+- Tree SHAP
+
+
+
+### You need to notice
+Note that in the end we estimate Shapley values with a weighted linear regression, so there is always some approximation error. Some Python packages do not report an error bound, which makes it hard to know whether the error comes from the linear regression, the data, or the model.
+
+
+# Reference
+
+[Shapley Additive Explanations (SHAP)](https://www.youtube.com/watch?v=VB9uV-x0gtg)
+
+[SHAP: A reliable way to analyze your model interpretability](https://towardsdatascience.com/shap-a-reliable-way-to-analyze-your-model-interpretability-874294d30af6)
+
+[【Python可解释机器学习库SHAP】:Python的可解释机器学习库SHAP](https://zhuanlan.zhihu.com/p/483622352)
+
+[Shapley Values : Data Science Concepts](https://www.youtube.com/watch?v=NBg7YirBTN8)
+
+# Appendix
+
+Other methods to interpret models:
+
+[Papers with Code - SHAP Explained](https://paperswithcode.com/method/shap)
\ No newline at end of file
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/Model_interpretability/attachments/Pasted image 20230329165406.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/Model_interpretability/attachments/Pasted image 20230329165406.png
new file mode 100644
index 000000000..cfebf84bd
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/Model_interpretability/attachments/Pasted image 20230329165406.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/Model_interpretability/attachments/Pasted image 20230329165429.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/Model_interpretability/attachments/Pasted image 20230329165429.png
new file mode 100644
index 000000000..cfebf84bd
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/Model_interpretability/attachments/Pasted image 20230329165429.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/Model_interpretability/attachments/Pasted image 20230329165523.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/Model_interpretability/attachments/Pasted image 20230329165523.png
new file mode 100644
index 000000000..805c00c13
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/Model_interpretability/attachments/Pasted image 20230329165523.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/Model_interpretability/attachments/Pasted image 20230329165623.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/Model_interpretability/attachments/Pasted image 20230329165623.png
new file mode 100644
index 000000000..d838834b3
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/Model_interpretability/attachments/Pasted image 20230329165623.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/Model_interpretability/attachments/Pasted image 20230329165818.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/Model_interpretability/attachments/Pasted image 20230329165818.png
new file mode 100644
index 000000000..bfc00cad3
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/Model_interpretability/attachments/Pasted image 20230329165818.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/Model_interpretability/attachments/Pasted image 20230329165840.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/Model_interpretability/attachments/Pasted image 20230329165840.png
new file mode 100644
index 000000000..c47074c02
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/Model_interpretability/attachments/Pasted image 20230329165840.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/Model_interpretability/attachments/Pasted image 20230329181956.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/Model_interpretability/attachments/Pasted image 20230329181956.png
new file mode 100644
index 000000000..a7bb26baa
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/Model_interpretability/attachments/Pasted image 20230329181956.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/Model_interpretability/attachments/Pasted image 20230329182011.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/Model_interpretability/attachments/Pasted image 20230329182011.png
new file mode 100644
index 000000000..1766fbd88
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/Model_interpretability/attachments/Pasted image 20230329182011.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/Model_interpretability/attachments/Pasted image 20230329205039.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/Model_interpretability/attachments/Pasted image 20230329205039.png
new file mode 100644
index 000000000..d9c5c634a
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/Model_interpretability/attachments/Pasted image 20230329205039.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/Model_interpretability/attachments/Pasted image 20230329205130.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/Model_interpretability/attachments/Pasted image 20230329205130.png
new file mode 100644
index 000000000..5c9f8b6a7
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/Model_interpretability/attachments/Pasted image 20230329205130.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/NLP/basic/tokenization.md b/content/computer_sci/Deep_Learning_And_Machine_Learning/NLP/basic/tokenization.md
new file mode 100644
index 000000000..86a3b2111
--- /dev/null
+++ b/content/computer_sci/Deep_Learning_And_Machine_Learning/NLP/basic/tokenization.md
@@ -0,0 +1,9 @@
+---
+title: Tokenization
+tags:
+- NLP
+- deep-learning
+- tokenization
+- basic
+---
+
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/Trick/DTW.md b/content/computer_sci/Deep_Learning_And_Machine_Learning/Trick/DTW.md
new file mode 100644
index 000000000..2f073d4c2
--- /dev/null
+++ b/content/computer_sci/Deep_Learning_And_Machine_Learning/Trick/DTW.md
@@ -0,0 +1,58 @@
+---
+title: Dynamic Time Warping (DTW)
+tags:
+- metrics
+- time-series-dealing
+- evalution
+---
+
+
+
+Euclidean distance can be a poor choice for comparing time series because of warping along the time axis. DTW is a distance measure for comparing two time series that takes this warping into account. This section explains how to compute the DTW distance.
+
+# Detail
+
+
+## Step 1. Prepare the input sequences
+
+Assume two time series, A and B.
+
+## Step 2. Compute the distance matrix
+
+Create a distance matrix whose entries are the distances between every pair of time points in sequence A and sequence B. Common distance measures include Euclidean distance, Manhattan distance, and cosine similarity; choose one based on your data type and needs.
+
+## Step 3. Initialize the accumulated distance matrix
+
+Create an accumulated distance matrix of the same size as the distance matrix, used to store the accumulated distance from the start to each position. Set the accumulated distance at the starting point (0, 0) to the distance matrix's value there.
+
+## Step 4. Compute the accumulated distances
+
+Starting from the origin, fill in the accumulated distance matrix via dynamic programming. For each position (i, j), **the accumulated distance equals the distance at that position plus the minimum of the accumulated distances at its three neighboring positions.**
+
+$$
+DTW(i, j) = d_{i,j} + \min{\{DTW(i-1,j), DTW(i, j-1), DTW(i-1, j-1)\}}
+$$
+
+
+## Step 5. Trace back the optimal path
+
+Starting from the bottom-right corner of the accumulated distance matrix, trace back to the origin (0, 0) along the path of minimum accumulated distance. The path traversed is the optimal warping path.
+
+## Step 6. Compute the final distance
+
+Compute the final DTW distance from the accumulated distance along the optimal path.
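+
+A minimal NumPy sketch of the recursion above, assuming two 1-D sequences and squared differences as the local distance, so the final DTW distance is the square root of the accumulated cost (as in the example below).
+
+```python
+import numpy as np
+
+def dtw_distance(a, b):
+    """DTW distance between 1-D sequences a and b using squared differences."""
+    n, m = len(a), len(b)
+    # Accumulated cost matrix, padded with infinity on the boundaries
+    acc = np.full((n + 1, m + 1), np.inf)
+    acc[0, 0] = 0.0
+    for i in range(1, n + 1):
+        for j in range(1, m + 1):
+            d = (a[i - 1] - b[j - 1]) ** 2          # local distance d(i, j)
+            acc[i, j] = d + min(acc[i - 1, j],      # from above
+                                acc[i, j - 1],      # from the left
+                                acc[i - 1, j - 1])  # from the diagonal
+    return np.sqrt(acc[n, m])
+
+print(dtw_distance(np.array([1, 2, 3, 4]), np.array([1, 1, 2, 3, 4])))
+```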
+
+# Example
+
+
+
+On the left is the distance matrix; on the right is the DTW matrix, i.e. the accumulated distance matrix.
+
+
+
+
+
+By tracing back we find the optimal warping path; the DTW distance is the square root of the accumulated cost along the optimal warping path, which in this example is $\sqrt{15}$.
+
+
+
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/Trick/attachments/Pasted image 20230522151015.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/Trick/attachments/Pasted image 20230522151015.png
new file mode 100644
index 000000000..5fe2d911f
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/Trick/attachments/Pasted image 20230522151015.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/Trick/attachments/Pasted image 20230526164724.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/Trick/attachments/Pasted image 20230526164724.png
new file mode 100644
index 000000000..75965a471
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/Trick/attachments/Pasted image 20230526164724.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/Trick/attachments/Pasted image 20230526170120.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/Trick/attachments/Pasted image 20230526170120.png
new file mode 100644
index 000000000..f053f0852
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/Trick/attachments/Pasted image 20230526170120.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/Trick/attachments/Pasted image 20230526170921.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/Trick/attachments/Pasted image 20230526170921.png
new file mode 100644
index 000000000..4549c820f
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/Trick/attachments/Pasted image 20230526170921.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/Trick/attachments/Pasted image 20230526171119.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/Trick/attachments/Pasted image 20230526171119.png
new file mode 100644
index 000000000..41b8120bc
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/Trick/attachments/Pasted image 20230526171119.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/Trick/quantile_loss.md b/content/computer_sci/Deep_Learning_And_Machine_Learning/Trick/quantile_loss.md
new file mode 100644
index 000000000..b879383af
--- /dev/null
+++ b/content/computer_sci/Deep_Learning_And_Machine_Learning/Trick/quantile_loss.md
@@ -0,0 +1,63 @@
+---
+title: Quantile loss
+tags:
+- loss-function
+- deep-learning
+- deep-learning-math
+---
+
+In most real-world forecasting problems, the uncertainty of our predictions carries significant value. Compared with providing only a point estimate, knowing the range of the prediction can substantially improve decision-making in many business applications. **Quantile loss** is a loss function designed to help us understand that prediction range.
+
+Quantile loss measures the discrepancy between the predicted distribution and the target distribution, and is especially suitable for forecasting problems with high uncertainty.
+
+# What is quantile
+
+[Quantile](Math/Statistics/Basic/Quantile.md)
+
+# What is a prediction interval
+
+
+A prediction interval is a way to quantify the uncertainty of a prediction. It provides **a probabilistic upper and lower bound** on the estimate of the outcome variable.
+
+
+
+The output itself is a random variable and therefore has a distribution. The purpose of a prediction interval is to understand how likely the outcome is to fall within a given range.
+
+# What is Quantile Loss
+
+With quantile loss, we express both the prediction and the target in quantile form; for example, we can use the predicted α-quantile as the prediction and the α-quantile of the true values as the target. Quantile loss then measures the discrepancy between these two distributions.
+
+The quantile regression loss is used to predict quantiles. For example, a prediction at the 0.9 quantile should over-predict 90% of the time.
+
+For a single data point with prediction $y_i^p$ and true value $y_i$, the regression loss for a quantile $q$ is:
+
+$$
+L(y_i^p, y_i) = \max[q(y_i^p - y_i), (q-1)(y_i^p - y_i)]
+$$
+
+Minimizing this loss over a set of predictions yields the $q$-quantile.
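+
+A minimal NumPy sketch of this loss, following the formula above with the error defined as prediction minus target; the arrays are invented example values.
+
+```python
+import numpy as np
+
+def quantile_loss(y_pred, y_true, q):
+    """Mean quantile loss for a quantile level q in (0, 1)."""
+    e = y_pred - y_true
+    return np.mean(np.maximum(q * e, (q - 1) * e))
+
+y_true = np.array([10.0, 12.0, 8.0, 11.0])
+y_pred = np.array([11.0, 11.0, 9.0, 10.0])
+print(quantile_loss(y_pred, y_true, q=0.50))  # symmetric penalty (median)
+print(quantile_loss(y_pred, y_true, q=0.75))  # one side penalized by 0.75, the other by 0.25
+```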
+
+
+## Intuitive Understanding
+
+In the regression loss above, since $q$ lies between 0 and 1, the first term is positive and dominates when we over-predict ($y_i^p > y_i$), while the second term dominates when we under-predict ($y_i^p < y_i$). When $q$ equals 0.5, under- and over-prediction are penalized by the same factor, and we obtain the median. The larger $q$ is, the more heavily over-predictions are penalized relative to under-predictions. For example, when $q$ equals 0.75, over-predictions are penalized by a factor of 0.75 and under-predictions by a factor of 0.25; over-predicting is therefore penalized three times as much as under-predicting, which yields the 0.75 quantile.
+
+## Why Quantile loss
+
+> [!quote]
+> **"Homoscedasticity", the constant-variance assumption**
+>
+> In least-squares regression, prediction intervals rest on the assumption that the residuals have constant variance across all values of the independent variables. This assumption is called "homoscedasticity" or the "constant variance assumption".
+>
+> This is a reasonable assumption about the error term in the regression model. In least-squares regression, we assume each observation of the dependent variable is composed of the true value plus an error term, and that the error term is independent and identically distributed, i.e. it has the same distribution at every value of the independent variables.
+>
+> If the residuals have constant variance across the independent variables, the magnitude of the errors does not change significantly as the independent variables change. In that case we can use statistical methods to compute a prediction interval that expresses our confidence about future observations.
+>
+> However, if the constant-variance assumption does not hold, i.e. the residuals have different variances at different values of the independent variables, least-squares regression results can be problematic. The prediction intervals may then under- or over-estimate the prediction uncertainty, giving inaccurate confidence estimates for future observations.
+
+Quantile loss regression can provide reasonable prediction intervals even when the residuals have non-constant variance or are not normally distributed.
+
+
+# Reference
+
+* [Kandi, Shabeel. “Prediction Intervals in Forecasting: Quantile Loss Function.” _Analytics Vidhya_, 24 Apr. 2023, https://medium.com/analytics-vidhya/prediction-intervals-in-forecasting-quantile-loss-function-18f72501586f.](https://medium.com/analytics-vidhya/prediction-intervals-in-forecasting-quantile-loss-function-18f72501586f)
\ No newline at end of file
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/clustering/k-means/application/color8bit_style.py b/content/computer_sci/Deep_Learning_And_Machine_Learning/clustering/k-means/application/color8bit_style.py
new file mode 100644
index 000000000..e27529a1d
--- /dev/null
+++ b/content/computer_sci/Deep_Learning_And_Machine_Learning/clustering/k-means/application/color8bit_style.py
@@ -0,0 +1,109 @@
+import cv2
+import numpy as np
+import matplotlib.pyplot as plt
+from tkinter import Tk, filedialog
+from mpl_toolkits.mplot3d import Axes3D
+from sklearn.cluster import KMeans
+
+
+# Create a Tkinter root window
+root = Tk()
+root.withdraw()
+
+# Open a file explorer dialog to select an image file
+file_path = filedialog.askopenfilename()
+
+# Read the selected image using cv2
+image = cv2.imread(file_path)
+
+# Convert the image to RGB color space
+image_rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
+
+# Get the dimensions of the image
+height, width, _ = image_rgb.shape
+
+# Reshape the image to a 2D array of pixels: rows are pixels, columns are RGB channels
+pixels = image_rgb.reshape((height * width, 3))
+
+# Copy the pixel array as the dataset (one RGB vector per row)
+dataset = pixels.copy()
+
+# Get the RGB values from the dataset
+red = dataset[:, 0]
+green = dataset[:, 1]
+blue = dataset[:, 2]
+
+
+
+# plot show
+'''
+# Plot the histograms
+plt.figure(figsize=(10, 6))
+plt.hist(red, bins=256, color='red', alpha=0.5, label='Red')
+plt.hist(green, bins=256, color='green', alpha=0.5, label='Green')
+plt.hist(blue, bins=256, color='blue', alpha=0.5, label='Blue')
+plt.title('RGB Value Histogram')
+plt.xlabel('RGB Value')
+plt.ylabel('Frequency')
+plt.legend()
+plt.show()
+
+
+# Plot the 3D scatter graph
+fig = plt.figure(figsize=(10, 8))
+ax = fig.add_subplot(111, projection='3d')
+ax.scatter(red, green, blue, c='#000000', s=1)
+ax.set_xlabel('Red')
+ax.set_ylabel('Green')
+ax.set_zlabel('Blue')
+ax.set_title('RGB Scatter Plot')
+plt.show()
+'''
+
+
+# Perform k-means clustering
+num_clusters = 3 # Specify the desired number of clusters
+kmeans = KMeans(n_clusters=num_clusters, n_init='auto', random_state=42)
+labels = kmeans.fit_predict(dataset)
+
+
+# Show K-means Clustering result
+'''
+# Plot the scatter plot for each iteration of the k-means algorithm
+fig = plt.figure(figsize=(10, 8))
+ax = fig.add_subplot(111, projection='3d')
+
+for i in range(num_clusters):
+ cluster_points = dataset[labels == i]
+ ax.scatter(cluster_points[:, 0], cluster_points[:, 1], cluster_points[:, 2], s=1)
+
+ax.set_xlabel('Red')
+ax.set_ylabel('Green')
+ax.set_zlabel('Blue')
+ax.set_title('RGB Scatter Plot - K-Means Clustering')
+plt.show()
+'''
+
+center_values = kmeans.cluster_centers_.astype(int)
+
+for i in range(num_clusters):
+ dataset[labels == i] = center_values[i]
+
+
+# Reshape the pixels array back into an image with the original dimensions and convert it to BGR color space
+reshaped_image = dataset.reshape((height, width, 3))
+reshaped_image_bgr = cv2.cvtColor(reshaped_image.astype(np.uint8), cv2.COLOR_RGB2BGR)
+
+# Display the image using matplotlib
+plt.imshow(reshaped_image)
+plt.show()
+
+# Opencv store image
+cv2.imwrite('C:/Users/BME51/Desktop/color8bit_style.jpg', reshaped_image_bgr)
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/clustering/k-means/application/example.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/clustering/k-means/application/example.png
new file mode 100644
index 000000000..ff3c7cb91
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/clustering/k-means/application/example.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/clustering/k-means/attachments/3ed5fee41bd566be093bebd62a33d12.jpg b/content/computer_sci/Deep_Learning_And_Machine_Learning/clustering/k-means/attachments/3ed5fee41bd566be093bebd62a33d12.jpg
new file mode 100644
index 000000000..0fda57126
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/clustering/k-means/attachments/3ed5fee41bd566be093bebd62a33d12.jpg differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/clustering/k-means/attachments/k4XcapI.gif b/content/computer_sci/Deep_Learning_And_Machine_Learning/clustering/k-means/attachments/k4XcapI.gif
new file mode 100644
index 000000000..ce5544e15
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/clustering/k-means/attachments/k4XcapI.gif differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/clustering/k-means/k_means.md b/content/computer_sci/Deep_Learning_And_Machine_Learning/clustering/k-means/k_means.md
new file mode 100644
index 000000000..227d09cd6
--- /dev/null
+++ b/content/computer_sci/Deep_Learning_And_Machine_Learning/clustering/k-means/k_means.md
@@ -0,0 +1,102 @@
+---
+title: K-means Clustering Algorithm
+tags:
+- machine-learning
+- clustering
+- algorithm
+---
+
+# Step by Step
+
+Our algorithm works as follows, assuming we have inputs $x_1, x_2, \cdots, x_n$ and a value of $K$:
+
+- **Step 1** - Pick $K$ random points as cluster centers called centroids.
+- **Step 2** - Assign each $x_i$ to nearest cluster by calculating its distance to each centroid.
+- **Step 3** - Find new cluster center by taking the average of the assigned points.
+- **Step 4** - Repeat Step 2 and 3 until none of the cluster assignments change.
+
+
+
+# Implementation
+
+## Core code
+
+### Distance calculation:
+
+```python
+import numpy as np
+
+# Euclidean Distance Calculator
+def dist(a, b, ax=1):
+ return np.linalg.norm(a - b, axis=ax)
+```
+
+
+### Generate Random Clustering center at first
+
+```python
+# Number of clusters
+k = 3
+# X coordinates of random centroids
+C_x = np.random.randint(0, np.max(X)-20, size=k)
+# Y coordinates of random centroids
+C_y = np.random.randint(0, np.max(X)-20, size=k)
+C = np.array(list(zip(C_x, C_y)), dtype=np.float32)
+print(C)
+```
+
+### Calculate distances, assign each point to a cluster, then update each cluster's center
+
+```python
+from copy import deepcopy
+
+# To store the value of centroids when it updates
+C_old = np.zeros(C.shape)
+# Cluster Labels (0, 1, 2)
+clusters = np.zeros(len(X))
+# Error func. - Distance between new centroids and old centroids
+error = dist(C, C_old, None)
+# Loop will run till the error becomes zero
+while error != 0:
+ # Assigning each value to its closest cluster
+ for i in range(len(X)):
+ distances = dist(X[i], C)
+ cluster = np.argmin(distances)
+ clusters[i] = cluster
+ # Storing the old centroid values
+ C_old = deepcopy(C)
+ # Finding the new centroids by taking the average value
+ for i in range(k):
+ points = [X[j] for j in range(len(X)) if clusters[j] == i]
+ C[i] = np.mean(points, axis=0)
+ error = dist(C, C_old, None)
+```
+
+## Simple approach by scikit-learn
+
+```python
+from sklearn.cluster import KMeans
+
+# Number of clusters
+kmeans = KMeans(n_clusters=3)
+# Fitting the input data
+kmeans = kmeans.fit(X)
+# Getting the cluster labels
+labels = kmeans.predict(X)
+# Centroid values
+centroids = kmeans.cluster_centers_
+
+# Comparing with scikit-learn centroids
+print(C) # From Scratch
+print(centroids) # From sci-kit learn
+```
+
+# Application
+
+## 8bit style
+
+Read an image and run k-means clustering on the pixel values to convert the picture to an 8-bit color style.
+
+
+
+[color8bit_style.py](https://github.com/PinkR1ver/Jude.W-s-Knowledge-Brain/blob/master/Deep_Learning_And_Machine_Learning/clustering/k-means/application/color8bit_style.py)
+
+# Reference
+
+* [K-Means Clustering in Python, https://mubaris.com/posts/kmeans-clustering/. Accessed 3 July 2023.](https://mubaris.com/posts/kmeans-clustering/)
\ No newline at end of file
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/AdaBoost.md b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/AdaBoost.md
new file mode 100644
index 000000000..e18a979a5
--- /dev/null
+++ b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/AdaBoost.md
@@ -0,0 +1,38 @@
+---
+title: AdaBoost
+tags:
+- deep-learning
+- ensemble-learning
+---
+
+# Video you need to watch first
+
+* [AdaBoost, Clearly Explained](https://www.youtube.com/watch?v=LsK-xG1cLYA)
+
+# Key words and equation
+
+- **A stump is a one-level tree that classifies using just one feature**
+- Amount of say
+
+$$
+\text{Amount of say} = \frac{1}{2}\log{(\frac{1-\text{Total Error}}{\text{Total Error}})}
+$$
+
+- New weight for an incorrectly classified sample
+
+$$
+\text{New Sample Weight} = \text{Sample Weight}\times e^{\text{amount of say}}
+$$
+
+- New weight for a correctly classified sample
+
+$$
+\text{New Sample Weight} = \text{Sample Weight}\times e^{-\text{amount of say}}
+$$
+
+- After reassigning the sample weights (and normalizing them), draw a bootstrap sample based on the new weights; high-weight samples are selected many times, steering the next stump toward the previous mistakes (see the sketch below)
+- For the final prediction, the **amount of say** of each stump decides how its vote is weighted.
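+
+A minimal numeric sketch of these two update formulas, assuming a stump that misclassifies 1 of 8 equally weighted samples; the numbers are invented purely for illustration.
+
+```python
+import numpy as np
+
+n_samples = 8
+weights = np.full(n_samples, 1 / n_samples)        # initial sample weights
+misclassified = np.array([False] * 7 + [True])     # the stump gets one sample wrong
+
+total_error = weights[misclassified].sum()
+amount_of_say = 0.5 * np.log((1 - total_error) / total_error)
+
+# Increase the weights of wrong samples, decrease the weights of correct ones
+new_weights = np.where(misclassified,
+                       weights * np.exp(amount_of_say),
+                       weights * np.exp(-amount_of_say))
+new_weights /= new_weights.sum()                   # normalize so the weights sum to 1
+
+print(round(amount_of_say, 3))  # ~0.973
+print(new_weights.round(3))     # the misclassified sample now carries far more weight
+```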
+
+# Question
+
+- **[why decision stumps instead of trees?](https://stats.stackexchange.com/questions/520667/adaboost-why-decision-stumps-instead-of-trees)**
\ No newline at end of file
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/Decision_Tree.md b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/Decision_Tree.md
new file mode 100644
index 000000000..f03a2f805
--- /dev/null
+++ b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/Decision_Tree.md
@@ -0,0 +1,11 @@
+---
+title: Decision Tree
+tags:
+- machine-learning
+---
+
+Only videos here:
+
+* [Decision and Classification Trees, Clearly Explained!!!](https://www.youtube.com/watch?v=_L39rN6gz7Y&t=229s "Decision and Classification Trees, Clearly Explained!!!")
+* [Regression Trees, Clearly Explained!!!](https://www.youtube.com/watch?v=g9c66TUylZ4&t=789s "Regression Trees, Clearly Explained!!!")
+
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/Deep_Neural_Decision_Forests.md b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/Deep_Neural_Decision_Forests.md
new file mode 100644
index 000000000..2c7cd3919
--- /dev/null
+++ b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/Deep_Neural_Decision_Forests.md
@@ -0,0 +1,37 @@
+---
+title: Deep Neural Decision Forests
+tags:
+- deep-learning
+---
+
+# Background
+
+* [Decision Tree](computer_sci/deep_learning_and_machine_learning/deep_learning/Decision_Tree.md)
+* [Random Forest](computer_sci/deep_learning_and_machine_learning/deep_learning/Random_Forest.md)
+
+# What is Deep Neural Decision Forests
+
+
+
+Deep Neural Decision Forests (dNDFs) combine neural networks and random forests, but lean more toward the neural network side. Essentially, a neural network incorporates a random forest to improve the NN's efficiency and accuracy, and training works the same way as for an ordinary NN.
+
+dNDFs differ from a plain NN at the output layer: instead of outputting through a plain FC layer, a random forest serves as the final classifier, i.e. the data representation produced by the preceding network is classified by the forest. **At the same time, by replacing the traditional random forest's local optimization with global optimization via back-propagation, training of the forest's parameters connects seamlessly with the upstream deep network.**
+
+> [!attention]
+> The method is different from random forest in the sense that it uses a principled, joint and global optimization of split and leaf node parameters and from conventional deep networks because a decision forest provides the final predictions
+
+# Math in Neural Decision Forests
+
+The decision tree model has to be stochastic so that it is differentiable and can later be trained by back-propagation. In a traditional decision tree, the path from a node to a leaf is determined by a decision function; in this model, two sets of probabilities determine the final output.
+
+1. The probability of an observation reaching each leaf. These are associated with the decision (split) nodes, which decide whether an observation goes left or right
+2. Once an observation reaches a leaf node, the probability that it takes a specific class (the two are combined in the formula below)
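+
+Putting the two together, a single tree's prediction can be written (in the notation of the dNDF paper) as a sum over leaves of the routing probability times the leaf's class distribution; the forest then averages this over its trees:
+
+$$
+\mathbb{P}[y \mid x, \Theta, \pi] = \sum_{\ell \in \mathcal{L}} \pi_{\ell y} \, \mu_\ell(x \mid \Theta)
+$$
+
+Here $\mu_\ell(x \mid \Theta)$ is the probability that $x$ is routed to leaf $\ell$ (probability 1 above), and $\pi_{\ell y}$ is the probability of class $y$ stored at that leaf (probability 2 above).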
+
+
+
+# Reference
+
+* [Deep Neural Decision Forests - YouTube Vedio by Venkatesh Bingi](https://www.youtube.com/watch?v=Uaimgqv75dY)
+* [Deep Neural Decision Forests - Medium by Gurparkash Singh Sohi](https://blog.goodaudience.com/deep-neural-decision-forests-b1dd39c4c6ce)
+* [Deep neural decision forest in keras - Medium by Kushal Mukherjee](https://kushalmukherjee.medium.com/deep-neural-decision-forest-in-keras-60134d270bfe)
+
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/GRU.md b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/GRU.md
new file mode 100644
index 000000000..44d842603
--- /dev/null
+++ b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/GRU.md
@@ -0,0 +1,7 @@
+---
+title: Gated Recurrent Unit
+tags:
+- deep-learning
+- time-series-dealing
+---
+
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/LSTM.md b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/LSTM.md
new file mode 100644
index 000000000..8f3780639
--- /dev/null
+++ b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/LSTM.md
@@ -0,0 +1,157 @@
+---
+title: Long Short-Term Memory Networks
+tags:
+- deep-learning
+- time-series-dealing
+- basic
+---
+
+> [!quote]
+> When I was learning LSTM, the new deep learning block, the *Transformer*, already dominated the NLP field. However, *Transformers* don't decisively outperform LSTMs in time-series-related tasks. The main reason is that LSTMs are more adept at handling **local temporal data**.
+
+
+LSTM was designed to solve the long-term dependency problem faced by traditional RNNs. When processing long sequences, a traditional RNN struggles to remember distant information because the gradient gradually vanishes or explodes as it propagates through time. This makes it hard for a traditional RNN to capture long-range dependencies, for example understanding the meaning of a long sentence in natural language processing.
+
+LSTM solves this problem effectively with a technique called gating. It contains an important component called the memory cell, which can selectively store, read, and erase information. The key to LSTM lies in its three gates: the input gate, the forget gate, and the output gate.
+
+1. Input gate: decides which information will be written into the memory cell. It uses a sigmoid activation to control the importance of the input.
+
+2. Forget gate: decides which information will be erased from the memory cell. Using another sigmoid activation and an element-wise multiplication, it determines how much of the previous cell state is kept.
+
+3. Output gate: decides which information from the memory cell is output to the next time step. This output is processed through a sigmoid activation and a tanh activation.
+
+
+These gates let the LSTM selectively remember or forget particular pieces of information, which is what allows it to handle long sequences effectively. The network structure lets information flow through time while retaining a long-term memory of the past.
+
+# Arch
+
+Comparing the traditional RNN block with the LSTM block helps reinforce the picture.
+
+A traditional RNN:
+
+
+
+
+An LSTM block:
+
+
+
+
+## Core idea
+
+The core idea of the LSTM is the cell state, which can be viewed as an internal memory running through the entire LSTM chain. It is similar to the hidden state in a traditional RNN, but the cell state is designed more carefully, which lets the LSTM capture long-range dependencies better.
+
+
+
+Updates to the cell state are controlled by the gates. In an LSTM, the input gate, forget gate, and output gate jointly decide how the cell state is updated.
+
+
+## Step-by-Step LSTM Walk Through
+
+### Step 1 - Throw away information
+
+The first step of the LSTM is to throw away information, via the forget gate layer.
+
+
+
+The forget gate layer takes $x_t$ and $h_{t-1}$ as inputs and computes $f_t$, whose values lie in (0, 1); $f_t$ is then multiplied element-wise with the cell state $C_{t-1}$. A value of 1 means "completely keep this", while 0 means "completely get rid of this".
+
+A good example: in NLP, the cell state might include the gender of the current subject so that the correct pronouns can be used. When we see a new subject, we want to forget the old subject's gender.
+
+### Step 2 - Decide What information we're going to store
+
+The second step is to decide which information to store in the cell state. There are two parts: the first computes $i_t$ through the input gate layer, and the second computes a vector of new candidate values $\tilde{C}_t$ through a tanh layer. These two parts are then used to update the information in the cell state.
+
+
+
+
+
+### Step 3 - Decide output
+
+
+
+The final output is a filtered version of the cell state, computed as shown in the figure above.
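+
+For reference, the standard LSTM update equations corresponding to the three steps above, in the notation of Colah's blog cited below:
+
+$$
+\begin{aligned}
+f_t &= \sigma(W_f \cdot [h_{t-1}, x_t] + b_f) \\
+i_t &= \sigma(W_i \cdot [h_{t-1}, x_t] + b_i) \\
+\tilde{C}_t &= \tanh(W_C \cdot [h_{t-1}, x_t] + b_C) \\
+C_t &= f_t * C_{t-1} + i_t * \tilde{C}_t \\
+o_t &= \sigma(W_o \cdot [h_{t-1}, x_t] + b_o) \\
+h_t &= o_t * \tanh(C_t)
+\end{aligned}
+$$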
+
+# Variants on LSTM
+
+LSTM has many variants; a few are listed here.
+
+## Adding "peephole connections"
+
+
+
+
+The cell state is added to the inputs of the gate layers; you can choose to add "peephole connections" to some of the three gates and not to others.
+
+The purpose of peephole connections is to strengthen the LSTM's modeling of the cell state and capture long-range dependencies in the sequence better.
+
+## Use coupled forget and input gates
+
+
+
+
+## GRU (Gated Recurrent Unit) ⭐⭐⭐
+
+* [GRU](computer_sci/deep_learning_and_machine_learning/deep_learning/GRU.md)
+
+
+
+The GRU is a well-known LSTM variant that deserves its own note.
+
+
+# Demo code & PyTorch-version LSTM explained
+
+
+
+```python
+import torch
+import torch.nn as nn
+import numpy as np
+import matplotlib.pyplot as plt
+
+class LSTM(nn.Module):
+ def __init__(self, input_size, output_size, hidden_size, num_layers):
+ super(LSTM, self).__init__()
+ self.input_size = input_size
+ self.output_size = output_size
+ self.hidden_size = hidden_size
+ self.num_layers = num_layers
+
+ self.lstm = nn.LSTM(input_size, hidden_size, num_layers)
+
+ self.fc = nn.Linear(hidden_size, output_size)
+
+ def forward(self, input_seq):
+ # input_seq: (seq_len, batch, input_size)
+ # lstm_out: (seq_len, batch, hidden_size)
+
+ lstm_out, (hidden_state, cell_state) = self.lstm(input_seq)
+
+ lstm_out = self.fc(lstm_out)
+
+ return lstm_out, hidden_state, cell_state
+
+
+if __name__ == '__main__':
+ seq = np.linspace(0, 3801, 3801)
+    # Example initial hidden/cell states (num_layers, batch, hidden_size);
+    # not passed to the LSTM below, so it defaults to zero initial states
+    h = torch.randn(1, 1, 64)
+    c = torch.randn(1, 1, 64)
+
+ lstm = LSTM(1, 1, 64, 1)
+
+ input = torch.Tensor(seq).view(len(seq), 1, -1)
+
+ lstm_out, hidden_state, cell_state = lstm(input)
+ lstm_out = torch.squeeze(lstm_out)
+
+ print(lstm_out.shape)
+ print(hidden_state.shape)
+ print(cell_state.shape)
+```
+
+# Reference
+
+* _Understanding LSTM Networks -- Colah’s Blog_. https://colah.github.io/posts/2015-08-Understanding-LSTMs/. Accessed 22 May 2023.
+* Hochreiter, Sepp, and Jürgen Schmidhuber. “Long Short-Term Memory.” _Neural Computation_, vol. 9, no. 8, Nov. 1997, pp. 1735–80. _DOI.org (Crossref)_, https://doi.org/10.1162/neco.1997.9.8.1735.
+* _Recurrent Nets That Time and Count_. https://ieeexplore.ieee.org/document/861302/. Accessed 22 May 2023.
\ No newline at end of file
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/Random_Forest.md b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/Random_Forest.md
new file mode 100644
index 000000000..a6e39ac50
--- /dev/null
+++ b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/Random_Forest.md
@@ -0,0 +1,16 @@
+---
+title: Random Forest
+tags:
+- machine-learning
+---
+
+# Background
+
+* [Decision Tree](computer_sci/deep_learning_and_machine_learning/deep_learning/Decision_Tree.md)
+
+# Detail
+
+Only videos here:
+
+* [StatQuest: Random Forests Part 1 - Building, Using and Evaluating](https://www.youtube.com/watch?v=J4Wdy0Wc_xQ&t=32s "StatQuest: Random Forests Part 1 - Building, Using and Evaluating")
+
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/Transformer.md b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/Transformer.md
new file mode 100644
index 000000000..7bf36149a
--- /dev/null
+++ b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/Transformer.md
@@ -0,0 +1,20 @@
+---
+title: "Transformer"
+tags:
+- deep-learning
+- attention
+---
+
+> [!info]
+> Before learning about the Transformer, you need to learn [⭐Attention](computer_sci/deep_learning_and_machine_learning/deep_learning/⭐Attention.md)
+
+
+
+The Transformer is a seq2seq model composed of an encoder and a decoder.
+
+
+# Encoder
+Below is the encoder architecture from the original paper.
+
+
+
\ No newline at end of file
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/XGBoost.md b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/XGBoost.md
new file mode 100644
index 000000000..da50be841
--- /dev/null
+++ b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/XGBoost.md
@@ -0,0 +1,147 @@
+---
+title: XGBoost
+tags:
+- deep-learning
+- ensemble-learning
+---
+
+
+XGBoost is an open-source software library that implements optimized distributed gradient boosting machine learning algorithms under the **Gradient Boosting** framework.
+
+# What you need to know first
+
+* [🚧🚧AdaBoost](computer_sci/deep_learning_and_machine_learning/deep_learning/AdaBoost.md)
+
+# What is XGBoost
+
+**XGBoost**, which stands for Extreme Gradient Boosting, is a scalable, distributed **gradient-boosted** decision tree (GBDT) machine learning library. It provides parallel tree boosting and is the leading machine learning library for regression, classification, and ranking problems.
+
+It’s vital to an understanding of XGBoost to first grasp the machine learning concepts and algorithms that XGBoost builds upon: **supervised machine learning**, **decision trees**, **ensemble learning**, and **gradient boosting**.
+
+Here, we need to know **ensemble learning** and **gradient boosting**, two things I didn't know before.
+
+## What is Ensemble Learning
+
+**Ensemble learning** is a general meta approach to machine learning that **seeks better predictive performance by combining the predictions from multiple models**.
+
+The three main classes of ensemble learning methods are **bagging**, **stacking**, and **boosting.**
+
+### Bagging
+
+Bagging means **bootstrap aggregation**. It's an ensemble learning method that seeks a diverse group of ensemble members by **varying the training data**.
+
+This typically involves using a single machine learning algorithm, almost always an unpruned decision tree, and **training each model on a different sample of the same training dataset.** The predictions made by the ensemble members are then **combined using simple statistics, such as voting or averaging.**
+
+Key to the method is the manner in which each sample of the dataset is prepared to train ensemble members. Each model gets its own unique sample of the dataset.
+
+Bagging adopts the **bootstrap distribution** for generating **different base learners**. In other words, it applies **bootstrap sampling** to obtain the data subsets for training the base learners.
+
+
+
+
+
+Key words of the bagging method:
+
+- **Bootstrap Sampling**
+- **Voting or averaging of predictions**
+- **Unpruned decision tree**
+
+> Random forest is the typical example based on the bagging method.
+>
+
+### Stacking
+
+Stacking means **Stacked Generalization**. It is an ensemble method that seeks a diverse group of members by **varying the model types** fit on the training data and using a model to combine predictions.
+
+> *Stacking is a general procedure where a learner is trained to combine the individual learners. Here, the individual learners are called the first-level learners, while the combiner is called the second-level learner, or meta-learner.*
+>
+
+Stacking has its own nomenclature where ensemble members are referred to as **level-0 models** and the model that is used to combine the predictions is referred to as a **level-1 model**.
+
+The two-level hierarchy of models is the most common approach, although more layers of models can be used. For example, instead of a single level-1 model, we might have 3 or 5 level-1 models and a single level-2 model that combines the predictions of level-1 models in order to make a prediction.
+
+
+
+Key words of the stacking method:
+
+- **Unchanged training dataset**
+- **Different machine learning algorithms for each ensemble member**
+- **Machine learning model to learn how to best combine predictions**
+
+### Boosting
+
+**Boosting** is an ensemble method that seeks to change the training data to focus attention on examples that previous fit models on the training dataset have gotten wrong.
+
+> *In boosting, […] the training dataset for each subsequent classifier increasingly focuses on instances misclassified by previously generated classifiers.*
+>
+
+The key property of boosting ensembles is the idea of **correcting prediction errors**. The models are fit and added to the ensemble sequentially such that the second model attempts to correct the predictions of the first model, the third corrects the second model, and so on.
+
+This typically involves the use of very simple decision trees that only make a single or a few decisions, referred to in boosting as weak learners. The predictions of the weak learners are combined using simple voting or averaging, although **the contributions are weighed proportional to their performance or capability**. The objective is to develop a so-called “***strong-learner***” from many purpose-built “***weak-learners***”.
+
+Typically, the training **dataset is left unchanged** and instead, the learning algorithm is modified to **pay more or less attention to specific samples based on whether they have been predicted correctly or incorrectly** by previously added ensemble members.
+
+
+
+Key words of the boosting method:
+
+- **Bias training data** toward those examples that are hard to predict
+- **Iteratively add ensemble members to correct predictions of prior models**
+- Combine predictions **using a weighted average** of models
+
+
+
+Type of boosting:
+
+- Adaptive boosting
+- Gradient boosting
+- Extreme gradient boosting
+
+# Introduction to three main type of boosting method
+
+## [Adaptive boosting](https://www.notion.so/AdaBoost-8e7009e35aee4334b31d46bfd7e3dbba)
+
+Adaptive Boosting (AdaBoost) was one of **the earliest boosting models** developed. It adapts and tries to **self-correct** in every iteration of the boosting process.
+
+AdaBoost initially gives the same weight to each dataset. Then, it automatically adjusts the weights of the data points after every decision tree. It **gives more weight to incorrectly classified items** to correct them for the next round. It repeats the process until the residual error, or the difference between actual and predicted values, falls below an acceptable threshold.
+
+You can use AdaBoost with many predictors, and it is typically not as sensitive as other boosting algorithms. This approach does not work well when there is a correlation among features or high data dimensionality. Overall, **AdaBoost is a suitable type of boosting for classification problems**.
+
+**Check the learning material below for more detail on this algorithm. 🚧🚧🚧**
+
+## Gradient boosting
+
+Gradient Boosting (GB) is similar to AdaBoost in that it, too, is a **sequential training technique**. The difference between AdaBoost and GB is that GB does not give incorrectly classified items more weight. Instead, GB software **optimizes the loss function by generating base learners sequentially** so that **the present base learner is always more effective than the previous one**. This method **attempts to generate accurate results initially instead of correcting errors throughout the process**, like AdaBoost. For this reason, GB software can lead to more accurate results. Gradient Boosting can help with both classification and regression-based problems.
+
+
+
+## Extreme gradient boosting
+
+Extreme Gradient Boosting (XGBoost) improves gradient boosting for **computational speed and scale** in several ways. XGBoost uses multiple cores on the CPU so that learning can occur in parallel during training. It is a boosting algorithm that can handle extensive datasets, making it attractive for big data applications. The key features of XGBoost are parallelization, distributed computing, cache optimization, and out-of-core processing.
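+
+A minimal usage sketch of the xgboost library's scikit-learn style API on a toy dataset; the dataset and hyperparameter values are arbitrary example choices.
+
+```python
+from xgboost import XGBClassifier
+from sklearn.datasets import load_breast_cancer
+from sklearn.model_selection import train_test_split
+
+X, y = load_breast_cancer(return_X_y=True)
+X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
+
+# Gradient-boosted trees; n_jobs=-1 uses all CPU cores for parallel training
+model = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1, n_jobs=-1)
+model.fit(X_train, y_train)
+
+print("test accuracy:", model.score(X_test, y_test))
+```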
+
+# Reference
+
+## XGBoost
+
+* [What is XGBoost?](https://www.nvidia.com/en-us/glossary/data-science/xgboost/)
+
+* [XGBoost Part 1 (of 4): Regression](https://www.youtube.com/watch?v=OtD8wVaFm6E)
+
+## Ensemble Learning
+
+* [A Gentle Introduction to Ensemble Learning Algorithms - MachineLearningMastery.com](https://machinelearningmastery.com/tour-of-ensemble-learning-algorithms/)
+
+* [集成学习(ensemble learning)原理详解_Soyoger的博客-CSDN博客_ensemble l](https://blog.csdn.net/qq_36330643/article/details/77621232)
+
+* [What is Boosting? Guide to Boosting in Machine Learning - AWS](https://aws.amazon.com/what-is/boosting/)
+
+* [Regression Trees, Clearly Explained!!!](https://www.youtube.com/watch?v=g9c66TUylZ4&list=PLblh5JKOoLUICTaGLRoHQDuF_7q2GfuJF&index=45)
+
+* [AdaBoost, Clearly Explained](https://www.youtube.com/watch?v=LsK-xG1cLYA)
+
+* [Gradient Boost Part 1 (of 4): Regression Main Ideas](https://www.youtube.com/watch?v=3CC4N4z3GJc)
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/1.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/1.png
new file mode 100644
index 000000000..6ef2272ec
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/1.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230315195603.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230315195603.png
new file mode 100644
index 000000000..0ac2c5c47
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230315195603.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230315200009.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230315200009.png
new file mode 100644
index 000000000..c3c719f18
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230315200009.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230315201906.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230315201906.png
new file mode 100644
index 000000000..723b873ae
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230315201906.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230315202047.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230315202047.png
new file mode 100644
index 000000000..0d4315fff
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230315202047.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230315202314.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230315202314.png
new file mode 100644
index 000000000..9f999a893
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230315202314.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230315205148.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230315205148.png
new file mode 100644
index 000000000..26a0c7413
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230315205148.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230315205727.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230315205727.png
new file mode 100644
index 000000000..7fd1cfc36
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230315205727.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230315205918.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230315205918.png
new file mode 100644
index 000000000..060f2ab48
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230315205918.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230315210032.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230315210032.png
new file mode 100644
index 000000000..aeee247dc
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230315210032.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230315210631.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230315210631.png
new file mode 100644
index 000000000..9a4d96e17
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230315210631.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230315210640.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230315210640.png
new file mode 100644
index 000000000..7e9ff069f
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230315210640.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230315210704.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230315210704.png
new file mode 100644
index 000000000..7e9ff069f
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230315210704.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230316160103.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230316160103.png
new file mode 100644
index 000000000..0bc6d8945
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230316160103.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230316162635.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230316162635.png
new file mode 100644
index 000000000..2d752de3a
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230316162635.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230316162642.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230316162642.png
new file mode 100644
index 000000000..9a5f64da6
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230316162642.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230413112821.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230413112821.png
new file mode 100644
index 000000000..980f2e231
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230413112821.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230413112822.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230413112822.png
new file mode 100644
index 000000000..980f2e231
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230413112822.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230522161052.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230522161052.png
new file mode 100644
index 000000000..6698796f9
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230522161052.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230522161520.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230522161520.png
new file mode 100644
index 000000000..1c90ae8aa
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230522161520.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230522161546.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230522161546.png
new file mode 100644
index 000000000..5bd046c5a
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230522161546.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230522162225.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230522162225.png
new file mode 100644
index 000000000..44ffe784a
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230522162225.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230522162523.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230522162523.png
new file mode 100644
index 000000000..93cd8fe8e
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230522162523.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230522162536.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230522162536.png
new file mode 100644
index 000000000..dc8b24ddd
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230522162536.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230522163338.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230522163338.png
new file mode 100644
index 000000000..e013f3f0a
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230522163338.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230522163353.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230522163353.png
new file mode 100644
index 000000000..9a1cc09f4
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230522163353.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230522164229.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230522164229.png
new file mode 100644
index 000000000..d4f8b78c6
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230522164229.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230522164237.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230522164237.png
new file mode 100644
index 000000000..c57aa16f9
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230522164237.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230522164557.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230522164557.png
new file mode 100644
index 000000000..337bbb502
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230522164557.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230522164609.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230522164609.png
new file mode 100644
index 000000000..7020464f9
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230522164609.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230522165102.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230522165102.png
new file mode 100644
index 000000000..d3a5521d1
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230522165102.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230522165117.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230522165117.png
new file mode 100644
index 000000000..0b123faae
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230522165117.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230522170059.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230522170059.png
new file mode 100644
index 000000000..4ee289abc
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230522170059.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230522170214.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230522170214.png
new file mode 100644
index 000000000..ad7079675
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230522170214.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230523164806.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230523164806.png
new file mode 100644
index 000000000..13f0df2ea
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Pasted image 20230523164806.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Untitled 1.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Untitled 1.png
new file mode 100644
index 000000000..59f3674d5
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Untitled 1.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Untitled 2.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Untitled 2.png
new file mode 100644
index 000000000..fdd4bafa4
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Untitled 2.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Untitled 3.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Untitled 3.png
new file mode 100644
index 000000000..8281c60aa
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Untitled 3.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Untitled 4.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Untitled 4.png
new file mode 100644
index 000000000..23849514b
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Untitled 4.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Untitled.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Untitled.png
new file mode 100644
index 000000000..d72e23841
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/attachments/Untitled.png differ
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/deep_learning_MOC.md b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/deep_learning_MOC.md
new file mode 100644
index 000000000..834433205
--- /dev/null
+++ b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/deep_learning_MOC.md
@@ -0,0 +1,36 @@
+---
+title: Deep Learning MOC
+tags:
+ - Catalog
+ - MOC
+---
+
+
+# Attention is all you need
+
+* [[computer_sci/deep_learning_and_machine_learning/deep_learning/⭐Attention|Attention Block]]
+* [[computer_sci/deep_learning_and_machine_learning/deep_learning/Transformer|Transformer]]
+
+
+# Tree-like architecture
+
+* [Decision Tree](computer_sci/deep_learning_and_machine_learning/deep_learning/Decision_Tree.md)
+* [Random Forest](computer_sci/deep_learning_and_machine_learning/deep_learning/Random_Forest.md)
+* [Deep Neural Decision Forests](computer_sci/deep_learning_and_machine_learning/deep_learning/Deep_Neural_Decision_Forests.md)
+* [XGBoost](computer_sci/deep_learning_and_machine_learning/deep_learning/XGBoost.md)
+
+
+# Ensemble Learning
+
+* [AdaBoost](computer_sci/deep_learning_and_machine_learning/deep_learning/AdaBoost.md)
+* [XGBoost](computer_sci/deep_learning_and_machine_learning/deep_learning/XGBoost.md)
+
+
+# Time-series handling blocks
+
+* [LSTM](computer_sci/deep_learning_and_machine_learning/deep_learning/LSTM.md)
+
+# Clustering Algorithm
+
+
+* [K-means Clustering Algorithm](computer_sci/deep_learning_and_machine_learning/clustering/k-means/k_means.md)
\ No newline at end of file
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/⭐Attention.md b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/⭐Attention.md
new file mode 100644
index 000000000..70f0d2a99
--- /dev/null
+++ b/content/computer_sci/Deep_Learning_And_Machine_Learning/deep_learning/⭐Attention.md
@@ -0,0 +1,132 @@
+---
+title: "⭐Attention"
+tags:
+- deep-learning
+- attention
+---
+# Self-Attention
+
+To introduce self-attention we use the *sequence labeling* task as the running example: the task takes N vectors as input and outputs N labels.
+
+A typical example is taking a sentence as input and deciding the part of speech of each word. In the sentence "I saw a saw", the two occurrences of "saw" are a verb and a noun respectively; if we used a fully-connected (FC) layer, the same input "saw" could never produce different outputs.
+
+
+
+One workaround is to add a window over the input and consider the information of neighboring words, similar to common signal-processing practice. However, a window has a finite, fixed length while sequence lengths vary, so for this kind of task we turn to a **self-attention** layer.
+
+## Detail
+
+
+
+For a self-attention layer, each generated vector $b^i$ takes all of the input vectors $\alpha^i$ into account.
+
+### Vector Relevance
+
+
+
+
+* *Step 1.* Use the dot-product to compute vector relevance
+
+
+
+* *Step 2.* Normalize the computed vector relevance
+
+
+* *Step 3.* Compute the final output from the vector relevance, i.e. the attention scores. This is a reweighting process that extracts information based on the attention scores
+
+
+
+> [!hint]
+> From the procedure above you can see that the computations of the different $b^i$ do not depend on each other, so self-attention parallelizes very well.
+
+### Matrix Detail
+
+$$
+q^i = W^q \alpha^i
+$$
+
+
+$$
+Q = [q^1 \quad q^2 \quad \cdots \quad q^N],\ \ I = [\alpha^1 \quad \alpha^2 \quad \cdots \quad \alpha^N]
+$$
+
+
+
+So,
+
+$$
+Q = W^q I
+$$
+
+Similarly,
+$$
+K = W^k I,\quad V = W^v I
+$$
+Calculate attention score $\alpha$,
+$$
+\begin{bmatrix}
+\alpha_{1,1} \\
+\alpha_{1,2} \\
+\vdots \\
+\alpha_{1,N}
+\end{bmatrix} =
+\begin{bmatrix}
+k^1 \\
+k^2 \\
+\vdots \\
+k^N
+\end{bmatrix} q^1
+$$
+
+So,
+$$
+A=\begin{bmatrix}
+\alpha_{1,1} & \alpha_{2,1} & \cdots & \alpha_{N,1} \\
+\alpha_{1,2} & \alpha_{2,2} & \cdots & \alpha_{N,2} \\
+\vdots & \vdots & \ddots & \vdots\\
+\alpha_{1,N} & \alpha_{2,N} & \cdots & \alpha_{N,N}
+\end{bmatrix} =
+\begin{bmatrix}
+k^1 \\
+k^2 \\
+\vdots \\
+k^N
+\end{bmatrix} [q^1 \quad q^2 \quad \cdots \quad q^N] = K^TQ
+$$
+
+$$
+A' = \text{Softmax}(A)
+$$
+
+Finally, calculate output $b$
+
+$$
+O = [b^1 \quad b^2 \quad \cdots \quad b^N] = [v^1 \quad v^2 \quad \cdots \quad v^N] A' = VA'
+$$
+
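+A minimal NumPy sketch of the matrix form above (dimensions and weights are made-up toy values; the usual $1/\sqrt{d_k}$ scaling from the Transformer paper is omitted to match the formulas here):
+
+```python
+import numpy as np
+
+def self_attention(I, Wq, Wk, Wv):
+    """Single-head self-attention; the input vectors a^1 ... a^N are the columns of I."""
+    Q = Wq @ I                                  # queries,  Q = W^q I
+    K = Wk @ I                                  # keys,     K = W^k I
+    V = Wv @ I                                  # values,   V = W^v I
+
+    A = K.T @ Q                                 # attention scores, A = K^T Q
+    A = A - A.max(axis=0, keepdims=True)        # numerical stability only
+    A_prime = np.exp(A) / np.exp(A).sum(axis=0, keepdims=True)   # column-wise softmax
+    return V @ A_prime                          # O = V A', columns are b^1 ... b^N
+
+rng = np.random.default_rng(0)
+I = rng.normal(size=(4, 3))                     # 3 input vectors of dimension 4
+Wq, Wk = rng.normal(size=(2, 4)), rng.normal(size=(2, 4))
+Wv = rng.normal(size=(4, 4))
+print(self_attention(I, Wq, Wk, Wv).shape)      # (4, 3)
+```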
+
+
+### Positional Encoding
+
+* Each position has a unique positional vector $e^i$
+ * hand-crafted (e.g. the sinusoidal encoding sketched below)
+ * learned from data
+
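+As an illustration of the hand-crafted option, here is a sketch of the sinusoidal positional encoding used in the original Transformer paper (the helper name is hypothetical):
+
+```python
+import numpy as np
+
+def sinusoidal_positional_encoding(n_positions, d_model):
+    """e^i[2k] = sin(i / 10000^(2k/d_model)), e^i[2k+1] = cos(i / 10000^(2k/d_model))."""
+    pos = np.arange(n_positions)[:, None]                 # (n_positions, 1)
+    dim = np.arange(d_model)[None, :]                     # (1, d_model)
+    angle = pos / np.power(10000.0, (2 * (dim // 2)) / d_model)
+    e = np.zeros((n_positions, d_model))
+    e[:, 0::2] = np.sin(angle[:, 0::2])                   # even dimensions use sine
+    e[:, 1::2] = np.cos(angle[:, 1::2])                   # odd dimensions use cosine
+    return e                                              # row i is the positional vector e^i
+
+print(sinusoidal_positional_encoding(4, 6).shape)         # (4, 6)
+```
+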
+## Fun Facts
+
+### Self-attention vs. CNN
+
+
+
+Because self-attention (and the Transformer built on it) has a larger function set than a CNN, it needs more data to train well.
+
+### Self-attention vs. RNN
+
+Nowadays the role of RNNs is being taken over by self-attention. With long sequences an RNN gradually forgets earlier information, and **an RNN cannot be parallelized**.
+Likewise, self-attention has a larger function set than an RNN; in certain settings, self-attention can degenerate into an RNN.
+
+# Multi-head Self-attention
+Multi-head self-attention simply puts several self-attention operations together; each head has its own $W^q$ and $W^k$ (and $W^v$) to capture a different kind of relevance.
+
+
+
\ No newline at end of file
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/machine_learning/MOC.md b/content/computer_sci/Deep_Learning_And_Machine_Learning/machine_learning/MOC.md
new file mode 100644
index 000000000..49436a120
--- /dev/null
+++ b/content/computer_sci/Deep_Learning_And_Machine_Learning/machine_learning/MOC.md
@@ -0,0 +1,7 @@
+---
+title: Machine Learning MOC
+tags:
+ - MOC
+ - machine-learning
+---
+* [SVM](computer_sci/deep_learning_and_machine_learning/machine_learning/SVM.md)
\ No newline at end of file
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/machine_learning/SVM.md b/content/computer_sci/Deep_Learning_And_Machine_Learning/machine_learning/SVM.md
new file mode 100644
index 000000000..6aead8781
--- /dev/null
+++ b/content/computer_sci/Deep_Learning_And_Machine_Learning/machine_learning/SVM.md
@@ -0,0 +1,47 @@
+---
+title: Support Vector Machine
+tags:
+ - machine-learning
+---
+
+# Overview
+
+
+
+# Hyper Parameters
+
+## Kernel Function
+
+* Linear
+* Polynomial
+* RBF
+ * $\gamma$: The gamma parameter **defines the influence of each training example on the decision boundary**. A higher gamma value gives more weight to the closer points, while a lower value allows points further away to have a significant impact. Higher values of gamma can lead to overfitting, especially in datasets with noise.
+## C Parameter
+
+The C parameter, also known as the regularization parameter, controls the trade-off between maximizing the margin and minimizing the classification error. **A smaller C value allows for a larger margin but may lead to misclassification of some training examples, while a larger C value focuses on classifying all training examples correctly but might result in a narrower margin**
+## [Training Method](https://wadhwatanya1234.medium.com/multi-class-classification-one-vs-all-one-vs-one-993dd23ae7ca)
+
+* One-vs-All
+* One-vs-One
+# Detail
+
+## Score Function
+
+$$
+f(x) = \sum_i \alpha_i y_i G(x, x_i) + bias
+$$
+* $\alpha_i$ is the weight of the corresponding support vector
+* $y_i$ is the label of the corresponding support vector
+* $G(x,x_i)$ is the kernel function between the input sample $x$ and the support vector $x_i$
+* $bias$ is the bias term
+## Decision Function
+
+$$
+\text{Decision Function} = \mathrm{sign}(f(x))
+$$
+We determine the sample's class by checking the sign of the decision function.
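+
+A small scikit-learn sketch tying the hyperparameters above together (synthetic data and arbitrary parameter values, purely for illustration):
+
+```python
+from sklearn.datasets import make_classification
+from sklearn.model_selection import train_test_split
+from sklearn.svm import SVC
+
+# Toy data, only for illustration
+X, y = make_classification(n_samples=200, n_features=4, random_state=0)
+X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
+
+# RBF kernel; C trades margin width against training error, gamma sets each sample's reach.
+# decision_function_shape only matters with more than two classes (one-vs-one here).
+clf = SVC(kernel="rbf", C=1.0, gamma="scale", decision_function_shape="ovo")
+clf.fit(X_train, y_train)
+
+print(clf.score(X_test, y_test))           # accuracy on held-out data
+print(clf.decision_function(X_test[:3]))   # the sign of f(x) decides the class
+```
+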
+# Reference
+
+* [“华为开发者论坛.” _Huawei_, https://developer.huawei.com/consumer/cn/forum/topic/41598169. Accessed 4 Sept. 2023.](https://developer.huawei.com/consumer/cn/forum/topic/41598169)
+* [Multi-class Classification — One-vs-All & One-vs-One](https://wadhwatanya1234.medium.com/multi-class-classification-one-vs-all-one-vs-one-993dd23ae7ca)
+* [Saini, Anshul. “Guide on Support Vector Machine (SVM) Algorithm.” _Analytics Vidhya_, 12 Oct. 2021, https://www.analyticsvidhya.com/blog/2021/10/support-vector-machinessvm-a-complete-guide-for-beginners/.](https://www.analyticsvidhya.com/blog/2021/10/support-vector-machinessvm-a-complete-guide-for-beginners/)
\ No newline at end of file
diff --git a/content/computer_sci/Deep_Learning_And_Machine_Learning/machine_learning/attachments/Pasted image 20230904225904.png b/content/computer_sci/Deep_Learning_And_Machine_Learning/machine_learning/attachments/Pasted image 20230904225904.png
new file mode 100644
index 000000000..114e20539
Binary files /dev/null and b/content/computer_sci/Deep_Learning_And_Machine_Learning/machine_learning/attachments/Pasted image 20230904225904.png differ
diff --git a/content/computer_sci/Hardware/Hardware_MOC.md b/content/computer_sci/Hardware/Hardware_MOC.md
new file mode 100644
index 000000000..6acd3e421
--- /dev/null
+++ b/content/computer_sci/Hardware/Hardware_MOC.md
@@ -0,0 +1,13 @@
+---
+title: Hardware - MOC
+tags:
+- MOC
+- hardware
+- chip
+---
+
+# Microcontroller unit (MCU)
+
+## Basic concepts
+
+* [Different programming interfaces](computer_sci/Hardware/MCU/Different%20programming%20interfaces.md)
diff --git a/content/computer_sci/Hardware/MCU/Different programming interfaces.md b/content/computer_sci/Hardware/MCU/Different programming interfaces.md
new file mode 100644
index 000000000..0d3573082
--- /dev/null
+++ b/content/computer_sci/Hardware/MCU/Different programming interfaces.md
@@ -0,0 +1,46 @@
+# What is a programming interface in an MCU?
+
+A **programming interface** is the connection (pins plus protocol) through which a programmer/debugger talks to a microcontroller (MCU). It is used to load a program into the MCU's memory and to debug it.
+
+# Different types of programming interfaces in MCU
+Chipmakers have different names for programming interfaces that all basically do the same thing:
+- ISP - programming interface for Atmel (now Microchip) AVRs. SPI-like (MISO, MOSI, SCK, reset). It is used for flash programming, not debugging.
+- PDI - newer programming interface for Atmel AVRs (eg. Xmega). Uses two wires (data and clock). Can do the same as ISP.
+- DebugWire - yet another interface from Atmel (this one uses only a single wire)
+- ICSP - programming interface for Microchip PIC line of MCUs
+- SWD - Serial Wire Debug - programming interface for MCUs with ARM Cortex-M cores (uses two wires - data and clock)
+- JTAG - very generic term, SPI-like interface used for [boundary scan](https://en.wikipedia.org/wiki/Boundary_scan), can also be used for programming/debugging MCUs (almost every vendor has its own protocol, so Cortex-M JTAG is not the same as AVR JTAG or Blackfin JTAG)
+- Spy-Bi-Wire - yet another two wire programming interface, this one is for TI's MSP430 MCUs
+
+## Differences between SWD and JTAG
+
+The ST-Link I currently use can drive either SWD or JTAG to debug an STM32, so the differences between the two modes are worth knowing.
+* JTAG (Joint Test Action Group) is an international standard test protocol, mainly used for on-chip testing. Most advanced devices today support JTAG, such as ARM, DSP and FPGA parts. A JTAG debug interface needs the VCC and GND power signals plus the four debug signals TMS, TCK, TDI and TDO, with optional TRST/RESET reset signals and an RTCK (return clock) signal.
+ * TMS (Test Mode Select): selects the particular test mode the JTAG interface operates in;
+ * TCK (Test Clock): clock input;
+ * TDI (Test Data Input): data enters the JTAG interface through the TDI pin;
+ * TDO (Test Data Output): data leaves the JTAG interface through the TDO pin;
+* Serial Wire Debug (SWD) is a debug mode distinct from JTAG and uses a different protocol, which shows most directly in the connector: compared with JTAG's 20 pins, SWD needs only 4 (or 5) pins, so it is much simpler, but it is not as widely supported as JTAG; mainstream debuggers only added SWD later.
+ * SWDIO: bidirectional serial data line carrying the debug data; a pull-up is recommended;
+ * SWCLK: serial clock input, the clock line for the debug signals; a pull-down is recommended;
+ * SWO: serial output pin through which the CPU debug interface can emit trace/debug information; this pin is optional;
+ * RESET: system reset signal from the debugger to the target CPU; also optional
+
+* SWD is more reliable than JTAG at high speed. With large amounts of data, JTAG downloads can fail, while this happens far less often with SWD. *In practically every case where JTAG emulation works you can switch directly to SWD, as long as your debugger supports it.*
+* When you are one GPIO short, SWD debugging helps because it needs fewer pins.
+
+
+* The JTAG debugger hardware versions also differ:
+ * JTAG V6 needs: GND, RST, SWDIO, SWDCLK;
+ * JTAG V7 needs: GND, RST, SWDIO, SWDCLK; compared with V6 its speed is clearly higher, about 6 times that of V6.
+ * JTAG V8 needs: VCC, GND, RST, SWDIO, SWDCLK; its speed can reach 10 M.
+
+
+
+# Reference
+
+[JTAG, SWD, EDBG, ICSP, ISP terms - Electrical Engineering Stack Exchange](https://electronics.stackexchange.com/questions/412029/jtag-swd-edbg-icsp-isp-terms)
+
+[jtag和swd的区别_jtag和swd区别_耶稣赞我萌的博客-CSDN博客](https://blog.csdn.net/yym6789/article/details/88721409)
+
+[STM32的JTAG和SWD模式_学术马的博客-CSDN博客](https://blog.csdn.net/w1050321758/article/details/108663603)
diff --git a/content/computer_sci/code_frame_learn/MOC.md b/content/computer_sci/code_frame_learn/MOC.md
new file mode 100644
index 000000000..65d513e43
--- /dev/null
+++ b/content/computer_sci/code_frame_learn/MOC.md
@@ -0,0 +1,10 @@
+---
+title: Code Framework Learn
+tags:
+ - web
+ - code_tool
+---
+
+# Web Framework
+
+* [Flask](computer_sci/code_frame_learn/flask/MOC.md)
diff --git a/content/computer_sci/code_frame_learn/flask/MOC.md b/content/computer_sci/code_frame_learn/flask/MOC.md
new file mode 100644
index 000000000..9bf07fe6b
--- /dev/null
+++ b/content/computer_sci/code_frame_learn/flask/MOC.md
@@ -0,0 +1,3 @@
+---
+title: Flask - MOC
+---
diff --git a/content/computer_sci/coding_knowledge/coding_lang_MOC.md b/content/computer_sci/coding_knowledge/coding_lang_MOC.md
new file mode 100644
index 000000000..21b2edf02
--- /dev/null
+++ b/content/computer_sci/coding_knowledge/coding_lang_MOC.md
@@ -0,0 +1,18 @@
+---
+title: About coding language design detail
+tags:
+- basic
+- coding-language
+- MOC
+---
+
+# Python
+
+[Why doesn't Python need pointers?](computer_sci/coding_knowledge/python/python_doesnt_need_pointer.md)
+
+# C
+
+# MATLAB
+
+# JavaScript
+
diff --git a/content/computer_sci/coding_knowledge/python/matplotlib_backend.md b/content/computer_sci/coding_knowledge/python/matplotlib_backend.md
new file mode 100644
index 000000000..165d906b5
--- /dev/null
+++ b/content/computer_sci/coding_knowledge/python/matplotlib_backend.md
@@ -0,0 +1,8 @@
+---
+title: Matplotlib Backend Review
+tags:
+- python
+- code
+- matplotlib
+---
+
diff --git a/content/computer_sci/coding_knowledge/python/python_doesnt_need_pointer.md b/content/computer_sci/coding_knowledge/python/python_doesnt_need_pointer.md
new file mode 100644
index 000000000..f92d8cab2
--- /dev/null
+++ b/content/computer_sci/coding_knowledge/python/python_doesnt_need_pointer.md
@@ -0,0 +1,116 @@
+---
+title: Why doesn't Python need pointers?
+tags:
+- python
+- coding-language
+- basic
+---
+
+
+Python doesn't require the explicit use of pointers like C because of its **underlying memory management** and **object model**.
+
+# Design Concept
+
+## Underlying memory management
+
+In Python, variables are *references to objects rather than memory addresses* like pointers in C. When you assign a value to a variable in Python, you are actually creating a reference to an object in memory. This reference allows you to access and manipulate the object, but you don't need to manage memory explicitly.
+
+Python uses automatic memory management through a mechanism called **garbage collection**. It keeps track of objects in memory and automatically deallocates memory for objects that are no longer referenced or used. **This automatic memory management frees developers from the responsibility of explicitly allocating and deallocating memory using pointers.**
+
+## Object model
+
+
+In Python, everything is an object, and every object contains at least three pieces of data:
+
+* Reference count
+* Type
+* Value
+
+Reference counting is an interesting concept in Python; it is designed for memory management.
+
+> [!help]
+> The reference count is the number of references to an object. Each object in Python carries a reference count: how many references or variables currently point to that object.
+>
+> The reference count mechanism is part of Python's memory management system. When an object is created or assigned to a variable, the reference count of that object is incremented. When a reference to an object is deleted or reassigned, the reference count is decremented. When the reference count of an object reaches zero, it means that there are no more references to that object, and the memory occupied by the object can be deallocated.
+>
+> The reference-count mechanism helps Python build its **garbage collection** memory management.
+
+Python's object model also provides additional benefits. For example, *objects in Python can have different types, and variables can be dynamically reassigned to different objects of different types without any explicit type declarations or memory management.*
+
+# Example
+
+Generating by [ChatGPT](https://chat.openai.com/):
+
+**Example about swap**
+
+---
+Code:
+
+```python
+def swap_variables(a, b):
+ return b, a
+
+# Example usage
+x = 10
+y = 20
+
+x, y = swap_variables(x, y)
+print("x =", x)
+print("y =", y)
+```
+
+
+Certainly! Let's go through the example step by step to understand how memory management and the reference system work in Python when swapping two variables.
+
+Step 1: Variable assignment
+```python
+x = 10
+y = 20
+```
+In this step, two variables `x` and `y` are created and assigned the values `10` and `20`, respectively. At this point, **the reference count for the objects `10` and `20` is 1** each because both `x` and `y` reference these objects.
+
+Step 2: Calling the `swap_variables` function
+```python
+x, y = swap_variables(x, y)
+```
+Here, the `swap_variables` function is called with the current values of `x` and `y` as arguments. The function returns a tuple containing the swapped values, which are then assigned to `x` and `y`. At this stage, the memory and reference count changes as follows:
+
+- The `swap_variables` function receives the values of `x` and `y` (references to objects `10` and `20`) as arguments. *The function does not create any new objects or change the reference count*.
+- The `swap_variables` function returns a tuple with the swapped values `(20, 10)`. *This creates a new tuple object in memory and increments the reference count of objects `20` and `10` to 2 each* (one reference from the tuple, one reference from `x` and `y`).
+
+Step 3: Swapping the values
+```python
+x, y = 20, 10
+```
+The parallel assignment swaps the values of `x` and `y`. At this point, `x` refers to the object `20`, and `y` refers to the object `10`. The reference count of objects `10` and `20` remains at 2 each (one reference from the tuple, one reference from `x` and `y`).
+
+Step 4: Printing the swapped values
+```python
+print("x =", x)
+print("y =", y)
+```
+This step simply prints the values of `x` and `y`. The output will be:
+```
+x = 20
+y = 10
+```
+
+Step 5: Memory deallocation
+At the end of the program, Python's garbage collector runs periodically to identify and deallocate objects that are no longer referenced. In this example, the tuple created in the `swap_variables` function (`(20, 10)`) becomes eligible for garbage collection because it is no longer referenced. When the garbage collector runs, it will decrement the reference count of objects `10` and `20` to 1 each, and since the reference count reaches zero, the memory occupied by the tuple and the objects will be deallocated.
+
+In summary, the Python memory management and reference system handles the creation, reference counting, and deallocation of objects automatically. The variables `x` and `y` act as references to the objects `10` and `20`, and swapping their values using the parallel assignment does not involve explicit memory management or pointer manipulation.
+
+---
+
+## Trick
+
+> [!quote]
+>You can use `getrefcount` to get the reference count of an object. *The count returned is generally one higher than you might expect, because it includes the (temporary) reference as an argument to* `getrefcount()`.
+
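+A quick sketch (exact numbers can differ between interpreter versions, and the call itself adds one temporary reference):
+
+```python
+import sys
+
+a = []                        # a new list object, referenced by the name `a`
+print(sys.getrefcount(a))     # typically 2: `a` plus the temporary argument reference
+
+b = a                         # a second name now refers to the same object
+print(sys.getrefcount(a))     # typically 3
+
+del b                         # dropping a reference decrements the count
+print(sys.getrefcount(a))     # back to 2
+```
+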
+# Conclusion
+
+Overall, Python's design philosophy aims to prioritize simplicity and readability while abstracting away low-level memory management concerns, making it easier and more convenient to work with compared to languages like C that require explicit pointer manipulation.
+
+# Reference
+
+* ChatGPT - Prompt: Why python don't need pointer?
\ No newline at end of file
diff --git a/content/computer_sci/computational_geometry/MOC.md b/content/computer_sci/computational_geometry/MOC.md
new file mode 100644
index 000000000..e4d352a82
--- /dev/null
+++ b/content/computer_sci/computational_geometry/MOC.md
@@ -0,0 +1,11 @@
+---
+title: Computational Geometry MOC
+tags:
+ - math
+ - MOC
+ - geometry
+---
+
+# 3D Geometry Algorithm
+
+* [Delaunay Triangulation](computer_sci/computational_geometry/delaunay_triangulation.md)
\ No newline at end of file
diff --git a/content/computer_sci/computational_geometry/delaunay_triangulation.md b/content/computer_sci/computational_geometry/delaunay_triangulation.md
new file mode 100644
index 000000000..0d0bce4c6
--- /dev/null
+++ b/content/computer_sci/computational_geometry/delaunay_triangulation.md
@@ -0,0 +1,14 @@
+---
+title: Delaunay Triangulation
+tags:
+ - math
+ - geometry
+---
+# What is Delaunay Triangulation?
+
+
+
+# Reference
+
+* [_Delaunay Triangulation (1/5) | Computational Geometry - Lecture 08_. _www.youtube.com_, https://www.youtube.com/watch?v=6UsdvbiJx54. Accessed 4 Sept. 2023.](https://www.youtube.com/watch?v=6UsdvbiJx54)
+* [_Delaunay Triangulation_. _www.youtube.com_, https://www.youtube.com/watch?v=GctAunEuHt4. Accessed 4 Sept. 2023.](https://www.youtube.com/watch?v=GctAunEuHt4)
\ No newline at end of file
diff --git a/content/computer_sci/data_structure_and_algorithm/MOC.md b/content/computer_sci/data_structure_and_algorithm/MOC.md
new file mode 100644
index 000000000..2871248a9
--- /dev/null
+++ b/content/computer_sci/data_structure_and_algorithm/MOC.md
@@ -0,0 +1,24 @@
+---
+title: Data Structure and Algorithm MOC
+tags:
+- MOC
+- algorithm
+- data-structure
+---
+
+# Tree-like Structure
+
+* [Fenwick Tree](computer_sci/data_structure_and_algorithm/tree/fenwick_tree.md)
+* [Segment Tree](computer_sci/data_structure_and_algorithm/tree/segment_tree.md)
+
+# Graph
+
+## Algorithm
+
+* [BFS](computer_sci/data_structure_and_algorithm/graph/BFS.md)
+* [Topological Sorting](computer_sci/data_structure_and_algorithm/graph/topological_sorting.md)
+* [Minimum Spanning Tree](computer_sci/data_structure_and_algorithm/graph/MST.md)
+
+## Type of graph
+
+* [Spanning Tree](computer_sci/data_structure_and_algorithm/graph/spanning_tree.md)
\ No newline at end of file
diff --git a/content/computer_sci/data_structure_and_algorithm/graph/BFS.md b/content/computer_sci/data_structure_and_algorithm/graph/BFS.md
new file mode 100644
index 000000000..67e1358cf
--- /dev/null
+++ b/content/computer_sci/data_structure_and_algorithm/graph/BFS.md
@@ -0,0 +1,19 @@
+---
+title: Breadth First Search in Python
+tags:
+- data-structure
+- basic
+- algorithm
+---
+
+# Basic Concept
+
+
+
+# Code Implementation
+
+
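+Since this note is about BFS in Python, a minimal adjacency-list sketch to start from (the example graph and names are made up for illustration):
+
+```python
+from collections import deque
+
+def bfs(graph, start):
+    """Return the nodes reachable from `start` in breadth-first order."""
+    visited = {start}
+    order = []
+    queue = deque([start])
+    while queue:
+        node = queue.popleft()          # FIFO queue: closest nodes are explored first
+        order.append(node)
+        for neighbor in graph.get(node, []):
+            if neighbor not in visited:
+                visited.add(neighbor)   # mark when enqueued to avoid duplicates
+                queue.append(neighbor)
+    return order
+
+graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
+print(bfs(graph, 'A'))                  # ['A', 'B', 'C', 'D']
+```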
+
+# Reference
+
+* [_Breadth First Search Algorithm Explained (With Example and Code)_. _www.youtube.com_, https://www.youtube.com/watch?v=YtD2KGRdn3s. Accessed 19 July 2023.](https://www.youtube.com/watch?v=YtD2KGRdn3s&t=2s)
\ No newline at end of file
diff --git a/content/computer_sci/data_structure_and_algorithm/graph/MST.md b/content/computer_sci/data_structure_and_algorithm/graph/MST.md
new file mode 100644
index 000000000..5a42decc3
--- /dev/null
+++ b/content/computer_sci/data_structure_and_algorithm/graph/MST.md
@@ -0,0 +1,7 @@
+---
+title: Minimum Spanning Tree
+tags:
+ - data-structure
+ - graph
+---
+Not now...
\ No newline at end of file
diff --git a/content/computer_sci/data_structure_and_algorithm/graph/attachments/Pasted image 20230914104155.png b/content/computer_sci/data_structure_and_algorithm/graph/attachments/Pasted image 20230914104155.png
new file mode 100644
index 000000000..d91cc8c3a
Binary files /dev/null and b/content/computer_sci/data_structure_and_algorithm/graph/attachments/Pasted image 20230914104155.png differ
diff --git a/content/computer_sci/data_structure_and_algorithm/graph/attachments/Pasted image 20230915111826.png b/content/computer_sci/data_structure_and_algorithm/graph/attachments/Pasted image 20230915111826.png
new file mode 100644
index 000000000..72ba4a9ae
Binary files /dev/null and b/content/computer_sci/data_structure_and_algorithm/graph/attachments/Pasted image 20230915111826.png differ
diff --git a/content/computer_sci/data_structure_and_algorithm/graph/attachments/Pasted image 20230915111850.png b/content/computer_sci/data_structure_and_algorithm/graph/attachments/Pasted image 20230915111850.png
new file mode 100644
index 000000000..d3ca71a25
Binary files /dev/null and b/content/computer_sci/data_structure_and_algorithm/graph/attachments/Pasted image 20230915111850.png differ
diff --git a/content/computer_sci/data_structure_and_algorithm/graph/attachments/Pasted image 20230915111856.png b/content/computer_sci/data_structure_and_algorithm/graph/attachments/Pasted image 20230915111856.png
new file mode 100644
index 000000000..f0839fdcb
Binary files /dev/null and b/content/computer_sci/data_structure_and_algorithm/graph/attachments/Pasted image 20230915111856.png differ
diff --git a/content/computer_sci/data_structure_and_algorithm/graph/attachments/Pasted image 20230915114014.png b/content/computer_sci/data_structure_and_algorithm/graph/attachments/Pasted image 20230915114014.png
new file mode 100644
index 000000000..5992b8487
Binary files /dev/null and b/content/computer_sci/data_structure_and_algorithm/graph/attachments/Pasted image 20230915114014.png differ
diff --git a/content/computer_sci/data_structure_and_algorithm/graph/attachments/Pasted image 20230915114049.png b/content/computer_sci/data_structure_and_algorithm/graph/attachments/Pasted image 20230915114049.png
new file mode 100644
index 000000000..77f0b509f
Binary files /dev/null and b/content/computer_sci/data_structure_and_algorithm/graph/attachments/Pasted image 20230915114049.png differ
diff --git a/content/computer_sci/data_structure_and_algorithm/graph/attachments/Pasted image 20230919101645.png b/content/computer_sci/data_structure_and_algorithm/graph/attachments/Pasted image 20230919101645.png
new file mode 100644
index 000000000..61bea066e
Binary files /dev/null and b/content/computer_sci/data_structure_and_algorithm/graph/attachments/Pasted image 20230919101645.png differ
diff --git a/content/computer_sci/data_structure_and_algorithm/graph/attachments/algo.gif b/content/computer_sci/data_structure_and_algorithm/graph/attachments/algo.gif
new file mode 100644
index 000000000..f3bc1c967
Binary files /dev/null and b/content/computer_sci/data_structure_and_algorithm/graph/attachments/algo.gif differ
diff --git a/content/computer_sci/data_structure_and_algorithm/graph/spanning_tree.md b/content/computer_sci/data_structure_and_algorithm/graph/spanning_tree.md
new file mode 100644
index 000000000..9bea91f09
--- /dev/null
+++ b/content/computer_sci/data_structure_and_algorithm/graph/spanning_tree.md
@@ -0,0 +1,56 @@
+---
+title: Spanning Tree
+tags:
+ - graph
+ - data-structure
+---
+
+# What is Spanning Tree?
+
+A tree with one extra edge added, so that it contains exactly one cycle, is called a **base-ring tree (基环树, also known as a pseudotree)**
+
+Example:
+
+
+
+
+# Why do we need Spanning Tree
+
+* **Network design**: Spanning trees are used to create efficient and redundant networks, such as in Ethernet networks or telecommunications.
+* **Routing protocols**: Spanning trees are employed in protocols like Spanning Tree Protocol (STP) and Rapid Spanning Tree Protocol (RSTP) for loop prevention and redundancy in network switches.
+* [Minimum Spanning Tree (MST)](computer_sci/data_structure_and_algorithm/graph/MST.md): Spanning trees can be used to find the minimum-weighted spanning tree in a weighted graph. This is particularly useful in **optimizing costs in transportation networks or electrical power distribution grids**.
+* **Broadcast algorithms**: Spanning trees are used in broadcasting messages or data packets efficiently within a network, ensuring that each node receives the message exactly once.
+
+> [!summary]
+> Spanning trees provide **a simplified view of the graph**, which **eliminates unnecessary edges** while **preserving connectivity**. This simplification helps in various graph-related algorithms, network design, and optimization problems.
+
+# More about Spanning Tree
+
+## Inward Spanning Tree, 内向基环树
+
+This is an extension of the base-ring tree concept. I could not find much English material about it, but Chinese sources and ChatGPT both understand the term and describe the same idea.
+
+
+
+An inward base-ring tree has a structure similar to a base-ring tree, but on a directed graph in which every node has exactly one outgoing edge, i.e. **every node's out-degree = 1**; *that is what "inward" means* (the graph looks as if everything points inward).
+
+The useful property of an inward base-ring tree is that you can run a BFS from all nodes with in-degree = 0, peeling layer by layer until only the cycle remains; this lets you find the longest chain hanging off the cycle.
+
+For concrete code, see the linked solution and the sketch below:
+
+[*Leet Code* - 2127 Maximum Employees to Be invited to a Meeting](https://github.com/PinkR1ver/JudeW-Problemset/blob/master/Leetcode/2127.%20Maximum%20Employees%20to%20Be%20Invited%20to%20a%20Meeting/main_bfs.py)
+
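+A minimal sketch of that peeling idea (the `favorite` array encodes the single outgoing edge i → favorite[i]; the names and the toy input are illustrative, see the linked solution for the full LC 2127 version):
+
+```python
+from collections import deque
+
+def longest_chain_lengths(favorite):
+    """Peel in-degree-0 nodes layer by layer; depth[v] = longest chain ending at v."""
+    n = len(favorite)
+    indegree = [0] * n
+    for v in favorite:
+        indegree[v] += 1
+
+    depth = [1] * n
+    queue = deque(i for i in range(n) if indegree[i] == 0)
+    while queue:
+        u = queue.popleft()
+        v = favorite[u]
+        depth[v] = max(depth[v], depth[u] + 1)
+        indegree[v] -= 1
+        if indegree[v] == 0:            # v has left the "tree" part, keep peeling
+            queue.append(v)
+    return depth                        # nodes still having indegree > 0 form the cycle
+
+print(longest_chain_lengths([1, 2, 0, 0, 3]))   # [3, 1, 1, 2, 2]; nodes 0, 1, 2 are the cycle
+```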
+
+## Outward Spanning Tree,外向基环树
+
+When every node's in-degree = 1 instead, the graph looks as if it points outward, as shown below:
+
+
+
+Concrete applications are still to be added.
+
+
+# Reference
+
+* [_$Note$-内向基环树 - AcWing_. https://www.acwing.com/blog/content/23513/. Accessed 15 Sept. 2023.](https://www.acwing.com/blog/content/23513/)
+* [_浅谈基环树(环套树) - Seaway-Fu - 博客园_. https://www.cnblogs.com/fusiwei/p/13815549.html. Accessed 19 Sept. 2023.](https://www.cnblogs.com/fusiwei/p/13815549.html)
\ No newline at end of file
diff --git a/content/computer_sci/data_structure_and_algorithm/graph/topological_sorting.md b/content/computer_sci/data_structure_and_algorithm/graph/topological_sorting.md
new file mode 100644
index 000000000..66fff9d58
--- /dev/null
+++ b/content/computer_sci/data_structure_and_algorithm/graph/topological_sorting.md
@@ -0,0 +1,83 @@
+---
+title: Topological Sorting
+tags:
+ - data-structure
+ - graph
+---
+# What is Topological Sorting
+
+**Topological Sorting**(拓扑排序) is designed for a Directed Acyclic Graph (**DAG, 有向无环图**). A topological sort is a linear ordering of the vertices such that for every directed edge $(u, v)$, vertex $u$ comes before $v$ in the ordering.
+
+## Example
+
+
+
+A graph can have more than one valid topological order. For this graph, "5 4 2 3 1 0" is one of the results. **The first vertex in a topological sort is always a vertex with an *in-degree of 0*.**
+
+# Algorithm to do Topological Sorting
+
+## DFS
+
+The DFS-based algorithm visits the nodes of the graph in an arbitrary order, and a branch of the search stops when it reaches a node that has already been visited or a leaf node.
+
+```fake
+L ← Empty list that will contain the sorted nodes
+while exists nodes without a permanent mark do
+ select an unmarked node n
+ visit(n)
+
+function visit(node n)
+ if n has a permanent mark then
+ return
+ if n has a temporary mark then
+ return error (graph not DAG)
+
+ mark n with a temporary mark
+
+ for each node m with a edge from n to m do
+ visit(m)
+
+ remove temporary mark from n
+ mark n with a permanent mark
+ add n to head of L
+```
+
+
+## Kahn's Algorithm
+
+ First, find a list of "start nodes" which have no incoming edges and insert them into a set S; *at least one such node must exist in a non-empty acyclic graph*
+
+```fake
+L ← Empty list that will contain the sorted elements
+S ← Set of all nodes with no incoming edge
+
+while S is not empty do
+
+ remove a node n from S
+ add n to L
+
+ for each node m with an edge e from n to m do
+ remove edge e from graph
+
+ if m has no other incoming edge then
+ insert m into S
+
+ if graph has edges then
+ return error (graph no DAG)
+ else
+ return L
+```
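+
+A runnable Python version of Kahn's algorithm (the edge list is an assumption meant to match the classic six-node example referenced above, since the figure is not reproduced here):
+
+```python
+from collections import deque
+
+def kahn_topological_sort(n, edges):
+    """Topologically sort nodes 0..n-1; each edge (u, v) means u must come before v."""
+    adj = [[] for _ in range(n)]
+    indegree = [0] * n
+    for u, v in edges:
+        adj[u].append(v)
+        indegree[v] += 1
+
+    queue = deque(i for i in range(n) if indegree[i] == 0)   # the set S of start nodes
+    order = []
+    while queue:
+        u = queue.popleft()
+        order.append(u)
+        for v in adj[u]:
+            indegree[v] -= 1          # "remove" edge u -> v
+            if indegree[v] == 0:
+                queue.append(v)
+
+    if len(order) != n:               # edges remain, so the graph has a cycle
+        raise ValueError("graph is not a DAG")
+    return order
+
+print(kahn_topological_sort(6, [(5, 2), (5, 0), (4, 0), (4, 1), (2, 3), (3, 1)]))
+# [4, 5, 2, 0, 3, 1] -- one valid order; "5 4 2 3 1 0" is another
+```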
+
+
+
+
+
+# Topological Sorting Application
+
+* Task Priorities
+
+# Reference
+
+* [“Topological Sorting.” _GeeksforGeeks_, 12 May 2013, https://www.geeksforgeeks.org/topological-sorting/.](https://www.geeksforgeeks.org/topological-sorting/)
+* [“拓撲排序.” 维基百科,自由的百科全书, 22 May 2022. _Wikipedia_, https://zh.wikipedia.org/w/index.php?title=%E6%8B%93%E6%92%B2%E6%8E%92%E5%BA%8F&oldid=71758255.](https://zh.wikipedia.org/wiki/%E6%8B%93%E6%92%B2%E6%8E%92%E5%BA%8F)
+* [“算法 - 拓扑排序.” _Earth Guardian_, 22 Aug. 2018, http://redspider110.github.io/2018/08/22/0092-algorithms-topological-sorting/index.html.](https://redspider110.github.io/2018/08/22/0092-algorithms-topological-sorting/)
\ No newline at end of file
diff --git a/content/computer_sci/data_structure_and_algorithm/string/KMP.md b/content/computer_sci/data_structure_and_algorithm/string/KMP.md
new file mode 100644
index 000000000..a90148d59
--- /dev/null
+++ b/content/computer_sci/data_structure_and_algorithm/string/KMP.md
@@ -0,0 +1,95 @@
+---
+title: Knuth–Morris–Pratt algorithm
+tags:
+ - algorithm
+ - string
+ - string-search
+---
+
+# Abstract
+
+* Class —— String Search
+* Data Structure —— String
+* Worst-case performance —— $\Theta(m)$ preprocessing + $\Theta(n)$ matching
+* Worst-case space complexity —— $\Theta(m)$
+
+
+# Details
+
+## What's KMP do
+
+KMP is one of the most commonly used algorithms for **string matching**.
+
+> [!abstract]
+> What is string matching?
+>
+> For example, given the string "BBC ABCDAB ABCDABCDABDE", I want to know whether it contains another string "ABCDABD".
+
+
+The Knuth-Morris-Pratt algorithm is named after its three inventors; the K at the front is the famous computer scientist Donald Knuth.
+
+## Core
+
+> [!abstract]
+> The core of the KMP algorithm is to use the results of matches already made to build a **Partial Match Table** that speeds up the search.
+
+The essence of "partial matching" is that a string's prefix and suffix sometimes share characters. For example, "ABCDAB" contains "AB" twice, so its partial match value is 2 (the length of "AB"). **When the pattern shifts, moving the first "AB" forward by 4 positions (string length - partial match value) brings it to the position of the second "AB".**
+
+> [!tip]
+> Take "ABCDABD" as an example:
+>
+> "A" has empty prefix and suffix sets, so the length of the common element is 0;
+>
+> "AB" has prefixes [A] and suffixes [B]; the length of the common element is 0;
+>
+> "ABC" has prefixes [A, AB] and suffixes [BC, C]; the length of the common element is 0;
+>
+> "ABCD" has prefixes [A, AB, ABC] and suffixes [BCD, CD, D]; the length of the common element is 0;
+>
+> "ABCDA" has prefixes [A, AB, ABC, ABCD] and suffixes [BCDA, CDA, DA, A]; the common element is "A", of length 1;
+>
+> "ABCDAB" has prefixes [A, AB, ABC, ABCD, ABCDA] and suffixes [BCDAB, CDAB, DAB, AB, B]; the common element is "AB", of length 2;
+>
+> "ABCDABD" has prefixes [A, AB, ABC, ABCD, ABCDA, ABCDAB] and suffixes [BCDABD, CDABD, DABD, ABD, BD, D]; the length of the common element is 0.
+>
+
+
+After a mismatch, the number of positions KMP shifts the pattern is determined by the **number of characters already matched** and the **corresponding partial match value**:
+
+$$
+\text{shift} = \text{number of matched characters} - \text{corresponding partial match value}
+$$
+
+
+# Code
+
+## Partial Match Table
+
+```python
+def partialMatchTable(pattern: str) -> list[int]:
+    """table[i] = length of the longest proper prefix of pattern[:i+1] that is also its suffix."""
+    table = [0] * len(pattern)
+
+    i = 1  # position being filled
+    j = 0  # length of the current matching prefix
+
+    while i < len(pattern):
+
+        if pattern[i] == pattern[j]:
+            table[i] = j + 1
+            i += 1
+            j += 1
+
+        elif j > 0:
+            j = table[j - 1]  # fall back to the previous partial match
+
+        else:
+            i += 1
+
+    return table
+```
+
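+With the table in place, a sketch of the search itself (reusing `partialMatchTable` above; the shift rule is exactly the formula from the previous section):
+
+```python
+def kmp_search(text: str, pattern: str) -> int:
+    """Return the index of the first occurrence of `pattern` in `text`, or -1."""
+    if not pattern:
+        return 0
+    table = partialMatchTable(pattern)
+    j = 0                                    # number of characters matched so far
+    for i, ch in enumerate(text):
+        while j > 0 and ch != pattern[j]:
+            j = table[j - 1]                 # shift = matched chars - partial match value
+        if ch == pattern[j]:
+            j += 1
+        if j == len(pattern):
+            return i - len(pattern) + 1
+    return -1
+
+print(kmp_search("BBC ABCDAB ABCDABCDABDE", "ABCDABD"))   # 15
+```
+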
+# Reference
+
+* [阮一峰. “字符串匹配的KMP算法.” _字符串匹配的KMP算法_, 23 Jan. 2024, https://www.ruanyifeng.com/blog/2013/05/Knuth%E2%80%93Morris%E2%80%93Pratt_algorithm.html. 👈 ⭐⭐⭐!](https://www.ruanyifeng.com/blog/2013/05/Knuth%E2%80%93Morris%E2%80%93Pratt_algorithm.html)
+* [_The Knuth-Morris-Pratt Algorithm in My Own Words - jBoxer_. http://jakeboxer.com/blog/2009/12/13/the-knuth-morris-pratt-algorithm-in-my-own-words/. Accessed 23 Jan. 2024.](http://jakeboxer.com/blog/2009/12/13/the-knuth-morris-pratt-algorithm-in-my-own-words/)
diff --git a/content/computer_sci/data_structure_and_algorithm/tree/attachments/Pasted image 20230710160348.png b/content/computer_sci/data_structure_and_algorithm/tree/attachments/Pasted image 20230710160348.png
new file mode 100644
index 000000000..3998e96bb
Binary files /dev/null and b/content/computer_sci/data_structure_and_algorithm/tree/attachments/Pasted image 20230710160348.png differ
diff --git a/content/computer_sci/data_structure_and_algorithm/tree/attachments/Pasted image 20230907145346.png b/content/computer_sci/data_structure_and_algorithm/tree/attachments/Pasted image 20230907145346.png
new file mode 100644
index 000000000..ebcd0431b
Binary files /dev/null and b/content/computer_sci/data_structure_and_algorithm/tree/attachments/Pasted image 20230907145346.png differ
diff --git a/content/computer_sci/data_structure_and_algorithm/tree/attachments/Pasted image 20230907170533.png b/content/computer_sci/data_structure_and_algorithm/tree/attachments/Pasted image 20230907170533.png
new file mode 100644
index 000000000..4749e1c30
Binary files /dev/null and b/content/computer_sci/data_structure_and_algorithm/tree/attachments/Pasted image 20230907170533.png differ
diff --git a/content/computer_sci/data_structure_and_algorithm/tree/fenwick_tree.md b/content/computer_sci/data_structure_and_algorithm/tree/fenwick_tree.md
new file mode 100644
index 000000000..b19172fe2
--- /dev/null
+++ b/content/computer_sci/data_structure_and_algorithm/tree/fenwick_tree.md
@@ -0,0 +1,66 @@
+---
+title: Fenwick Tree
+tags:
+- data-structure
+- basic
+- algorithm
+---
+
+
+
+
+A **Fenwick tree (树状数组)**, also called a **Binary Indexed Tree (BIT, 二叉索引树)**, was originally designed to compute cumulative frequencies in data compression; today it is mostly used to compute [prefix sums](tmp_script/prefix_sum.md) and range sums of a sequence efficiently. It returns any prefix sum in $O(\log{n})$ time, supports single-point updates in $O(\log{n})$ time, and uses $O(n)$ space.
+
+The operations we want a BIT to support are:
+1. Change the value stored at index $i$ (a **point update**).
+2. Query the sum of the prefix of length $k$ (a **prefix-sum query**).
+
+
+# Origin
+
+According to Peter M. Fenwick, just as every integer can be written as a sum of powers of two, a sequence can be represented as a collection of sub-sequences whose sums we store. With this idea we can *split a prefix sum into the sums of several sub-sequences*, in a way that closely mirrors the binary decomposition of the index. *On the one hand, the number of sub-sequences equals the number of 1-bits in the binary representation of the index; on the other hand, the number of elements $f[i]$ each sub-sequence covers is also a power of two.*
+
+# Step by Step
+
+
+## `lowbit(x:int) -> int`
+
+This function returns the value represented by the lowest set bit of its argument's binary representation, for example:
+
+```
+lowbit(34) -> 2
+lowbit(12) -> 4
+lowbit(8) -> 8
+```
+
+When coding, the bit trick `(~i + 1) & i` computes the value of the lowest set bit: in `~i + 1` and `i`, only the lowest 1-bit is set in both, while no other position is set in both, so the `and` of the two leaves exactly that lowest 1-bit.
+
+In practice there is a further trick: `lowbit` is usually written as:
+
+```python
+def lowbit(x):
+ return x & (-x)
+```
+
+No explicit `+1` is needed, because when we negate an integer `i` to get `-i`, in two's-complement representation only the lowest 1-bit stays in place while all higher bits flip. Taking `i & -i` therefore keeps the lowest 1-bit of `i` and clears everything else. The trick rests on two's complement: *a positive integer is represented by itself, while a negative integer is represented by inverting the bits of its absolute value and adding 1*; so in most languages `-i` is exactly `~i + 1`.
+
+
+## Build Array `BIT` (**Binary Indexed Tree**)
+
+A binary indexed tree is usually implemented with an array.
+
+In the Fenwick tree structure, an array `BIT` maintains partial sums of the array $A$:
+$$
+{BIT}_i = \sum_{j=i-lowbit(i)+1}^{i} A_j
+$$
+Code implementation:
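+
+A minimal sketch, assuming a 1-indexed array `A` with `A[0]` unused (the helper names `build`, `update` and `prefix_sum` are illustrative):
+
+```python
+def lowbit(x: int) -> int:
+    return x & (-x)
+
+def build(A):
+    """O(n) build: push each block's sum up to its parent block."""
+    n = len(A) - 1
+    BIT = [0] * (n + 1)
+    for i in range(1, n + 1):
+        BIT[i] += A[i]
+        parent = i + lowbit(i)
+        if parent <= n:
+            BIT[parent] += BIT[i]
+    return BIT
+
+def update(BIT, i, delta):
+    """Point update: A[i] += delta."""
+    while i < len(BIT):
+        BIT[i] += delta
+        i += lowbit(i)
+
+def prefix_sum(BIT, i):
+    """Return A[1] + ... + A[i]."""
+    s = 0
+    while i > 0:
+        s += BIT[i]
+        i -= lowbit(i)
+    return s
+
+A = [0, 3, 2, -1, 6, 5, 4, -3, 3]     # A[0] unused
+BIT = build(A)
+print(prefix_sum(BIT, 4))             # 3 + 2 - 1 + 6 = 10
+update(BIT, 3, 2)                     # A[3] goes from -1 to 1
+print(prefix_sum(BIT, 4))             # 12
+```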
+
+
+# Reference
+
+* [二叉索引树 | 三点水. https://lotabout.me/2018/binary-indexed-tree/. Accessed 11 July 2023.](https://lotabout.me/2018/binary-indexed-tree/)
\ No newline at end of file
diff --git a/content/computer_sci/data_structure_and_algorithm/tree/segment_tree.md b/content/computer_sci/data_structure_and_algorithm/tree/segment_tree.md
new file mode 100644
index 000000000..ad2d705f2
--- /dev/null
+++ b/content/computer_sci/data_structure_and_algorithm/tree/segment_tree.md
@@ -0,0 +1,163 @@
+---
+title: Segment Tree
+tags:
+ - data-structure
+ - tree
+---
+# Overview
+
+A Segment Tree (**线段树**) is a data structure for interval-query problems. It **handles problems with many interval operations efficiently**, such as *querying* the maximum, minimum or *sum* of an interval, and *updating* values.
+
+A segment tree splits the given interval into smaller sub-intervals and represents them with a tree. **Each node represents one sub-interval, and the root represents the whole interval.** Each node stores some statistics of its sub-interval, such as the maximum, minimum or sum.
+
+To build a segment tree, the problem is repeatedly split: a large interval is divided into two smaller halves, and the nodes of each half are built recursively. When an interval shrinks to length 1 it becomes a leaf, whose value is taken from the original data.
+
+Once the segment tree is built, queries and updates are efficient. A query recursively walks the nodes and collects the required statistics inside the given range. An update recursively modifies the target position and refreshes the statistics of its ancestors.
+
+**Because the intervals represented by sibling nodes do not overlap, queries and updates can prune whole subtrees using the interval boundaries, which is where the efficiency comes from.**
+
+# Detail
+
+## Basic
+
+
+
+*Segment Tree* is a basically binary tree, we can represent segment tree in a simple linear array. We can learn segment tree by knowing some key points. We consider an array $A$ of size $N$ and a corresponding Segment Tree $T$.
+
+1. The root of $T$ will represent the whole array $A[0:N-1]$
+2. In each step, the segment is divided into half and the two children represent those two halves. $A[0:N-1]$ will be divided into $A[0, (N-1)/2]$ & $A[(N-1)/2 + 1, N-1]$
+3. The **height** of the segment tree is $\log_2{N}$. There are $N-1$ **internal nodes** and $N$ **leaves**, so the **total number of nodes** is $2 \times N - 1$.
+
+## Operations
+
+Once the Segment Tree is built, its structure cannot be changed. We can update the values of nodes but we cannot change its structure. Segment tree provides two operations:
+1. **Update**: To update the element of the array $A$ and reflect the corresponding change in the Segment tree.
+2. **Query**: In this operation we can **query on an interval or segment and return the answer to the problem** (say minimum/maximum/summation in the particular segment).
+
+## Time Complexity and Code Implementation Demo
+
+### Build
+
+
+
+
+```c
+void build(int node, int start, int end)
+{
+ if(start == end)
+ {
+ // Leaf node will have a single element
+ tree[node] = A[start];
+ }
+ else
+ {
+ int mid = (start + end) / 2;
+ // Recurse on the left child
+ build(2*node, start, mid);
+ // Recurse on the right child
+ build(2*node+1, mid+1, end);
+ // Internal node will have the sum of both of its children
+ tree[node] = tree[2*node] + tree[2*node+1];
+ }
+}
+```
+
+
+```python
+import numpy as np
+
+def segment_tree_build(nums):
+    n = len(nums)
+    tree = np.zeros(2 * n)
+
+    # leaves occupy tree[n : 2n]
+    tree[n:2 * n] = nums
+
+    # each internal node is the sum of its two children
+    for i in range(n - 1, 0, -1):
+        tree[i] = tree[2 * i] + tree[2 * i + 1]
+
+    return tree
+```
+
+
+**Every node holds the sum of an interval**. The build complexity is $O(N)$.
+
+### Update
+
+```c
+void update(int node, int start, int end, int idx, int val)
+{
+ if(start == end)
+ {
+ // Leaf node
+ A[idx] += val;
+ tree[node] += val;
+ }
+ else
+ {
+ int mid = (start + end) / 2;
+ if(start <= idx and idx <= mid)
+ {
+ // If idx is in the left child, recurse on the left child
+ update(2*node, start, mid, idx, val);
+ }
+ else
+ {
+ // if idx is in the right child, recurse on the right child
+ update(2*node+1, mid+1, end, idx, val);
+ }
+ // Internal node will have the sum of both of its children
+ tree[node] = tree[2*node] + tree[2*node+1];
+ }
+}
+```
+
+
+To update an element, **look at the interval in which the element is present and recurse accordingly on the left or the right child**.
+
+The complexity of an update is $O(\log N)$.
+
+
+### Query
+
+```c
+int query(int node, int start, int end, int l, int r)
+{
+ if(r < start or end < l)
+ {
+ // range represented by a node is completely outside the given range
+ return 0;
+ }
+ if(l <= start and end <= r)
+ {
+ // range represented by a node is completely inside the given range
+ return tree[node];
+ }
+ // range represented by a node is partially inside and partially outside the given range
+ int mid = (start + end) / 2;
+ int p1 = query(2*node, start, mid, l, r);
+ int p2 = query(2*node+1, mid+1, end, l, r);
+ return (p1 + p2);
+}
+```
+
+The query range is split into pieces that are looked up at different nodes and then merged.
+## LazyTag Trick
+
+
+The lazy tag exists for range updates such as adding $k$ to every element in $[l, r]$: doing many single-point updates wastes too much time, and the lazy tag reduces the time complexity.
+
+The idea is that a segment node carrying a lazy tag is itself already up to date, while the nodes below it are not. Those lower nodes are only updated when they are actually visited, which saves unnecessary work.
+
+### Lazy Tag Propagation
+
+Lazy propagation is an optimization technique for segment trees that **minimizes** the number of operations.
+
+Lazy propagation is hard to explain in text, so watching a tutorial video is the best way to learn and review it.
+
+Please watch the video in reference 3: [_Lazy Propagation Segment Tree_. _www.youtube.com_, https://www.youtube.com/watch?v=xuoQdt5pHj0. Accessed 12 Sept. 2023.](https://www.youtube.com/watch?v=xuoQdt5pHj0)
+
+
+
+# Reference
+
+* [“Segment Trees Tutorials & Notes | Data Structures.” _HackerEarth_, https://www.hackerearth.com/practice/data-structures/advanced-data-structures/segment-trees/tutorial/. Accessed 7 Sept. 2023.](https://www.hackerearth.com/practice/data-structures/advanced-data-structures/segment-trees/tutorial/)
+* [“力扣(LeetCode)官网 - 全球极客挚爱的技术成长平台.” _力扣 LeetCode_, https://leetcode.cn/problems/handling-sum-queries-after-update/solutions/2356392/geng-xin-shu-zu-hou-chu-li-qiu-he-cha-xu-kv6u/. Accessed 11 Sept. 2023.](https://leetcode.cn/problems/handling-sum-queries-after-update/solutions/2356392/geng-xin-shu-zu-hou-chu-li-qiu-he-cha-xu-kv6u/)
+* [_Lazy Propagation Segment Tree_. _www.youtube.com_, https://www.youtube.com/watch?v=xuoQdt5pHj0. Accessed 12 Sept. 2023.](https://www.youtube.com/watch?v=xuoQdt5pHj0)
\ No newline at end of file
diff --git a/content/computer_sci/data_structure_and_algorithm/two_pointers.md b/content/computer_sci/data_structure_and_algorithm/two_pointers.md
new file mode 100644
index 000000000..a59d57196
--- /dev/null
+++ b/content/computer_sci/data_structure_and_algorithm/two_pointers.md
@@ -0,0 +1,7 @@
+---
+title: Two Pointers
+tags:
+ - algorithm
+ - pointer
+---
+
diff --git a/content/data_sci/basic/attachments/uncorrelated-vs-independent.pdf b/content/data_sci/basic/attachments/uncorrelated-vs-independent.pdf
new file mode 100644
index 000000000..2e5bae63e
Binary files /dev/null and b/content/data_sci/basic/attachments/uncorrelated-vs-independent.pdf differ
diff --git a/content/data_sci/basic/relationship.md b/content/data_sci/basic/relationship.md
new file mode 100644
index 000000000..bb100e41d
--- /dev/null
+++ b/content/data_sci/basic/relationship.md
@@ -0,0 +1,66 @@
+---
+title: Independence & Correlation
+tags:
+ - math
+ - statistics
+---
+
+# Independent
+
+
+Plainly speaking, independence means that two or more variables do not influence each other: if two variables are independent, the value of one tells you nothing about the other. For example, a die and a coin are independent, because the result of the coin toss does not affect the result of the die roll.
+
+In mathematics, the joint probability density factors into the product of the marginal densities:
+
+$$
+p_{X,Y}(x,y) = p_X(x) * p_Y(y)
+$$
+
+$$
+\begin{equation}
+\begin{split} \rightarrow E[XY]
+& = \int\int xy \, p_{X,Y}(x,y)\,dx\,dy \\
+& = \int x p_X(x)\,dx \int y p_Y(y)\,dy \\
+& = E[X]E[Y]
+\end{split}
+\end{equation}
+$$
+
+# Correlation
+
+Correlation means that the values of two variables move together: when one increases or decreases, the other tends to increase or decrease as well. For example, consider height and weight; taller people generally weigh more.
+
+
+In mathematics, the correlation coefficient is commonly used to judge how correlated two variables are:
+
+$$
+\rho(X, Y) = \frac{Cov[X,Y]}{\sqrt{Var[X] Var[Y]}}
+$$
+
+where $Cov$ denotes the covariance:
+
+$$
+Cov[X, Y] = E[XY] - E[X]E[Y]
+$$
+So being uncorrelated means $\rho(X,Y) = 0$, i.e. $Cov[X,Y] = 0$, which is exactly $E[XY] = E[X]E[Y]$.
+
+# Conclusion
+
+* If $X$ and $Y$ are independent, they are also uncorrelated. **Independent -> Uncorrelated**; independence is a stronger constraint than being uncorrelated.
+* However, being uncorrelated does not imply independence (see the sketch below).
+
+## Math conclusion
+
+$$
+E[XY] = E[X]E[Y]
+$$
+-> un-correlation
+
+$$
+p_{X,Y}(x,y) = p_X(x) * p_Y(y)
+$$
+
+-> independent
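+
+A quick numerical illustration of the classic counterexample $Y = X^2$ with $X$ uniform on $[-1, 1]$: the two are clearly dependent, yet their covariance is (up to sampling noise) zero. A sketch:
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+x = rng.uniform(-1, 1, size=1_000_000)
+y = x ** 2                                       # y is fully determined by x, so NOT independent
+
+cov = np.mean(x * y) - np.mean(x) * np.mean(y)   # Cov[X, Y] = E[XY] - E[X]E[Y]
+print(round(cov, 4))                             # ~0.0 -> uncorrelated
+print(round(np.corrcoef(x, y)[0, 1], 4))         # the correlation coefficient is also ~0
+```
+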
+# Reference
+
+* [Uncorrelated-vs-independent.pdf](https://pinktalk.online/data_sci/basic/attachments/uncorrelated-vs-independent.pdf)
diff --git a/content/data_sci/data_sci_MOC.md b/content/data_sci/data_sci_MOC.md
new file mode 100644
index 000000000..7849aca2c
--- /dev/null
+++ b/content/data_sci/data_sci_MOC.md
@@ -0,0 +1,23 @@
+---
+title: Data science MOC
+tags:
+- data
+- statistics
+---
+
+# Basic Concept
+
+
+# [Stochastic Process](data_sci/stochastic_process/MOC.md)
+
+
+# Data visualization
+
+## Visualization Style
+
+* [The economist style graph](data_sci/visualization/visual_style/the_economist_style.md)
+
+## Visualization Tool
+
+
+
diff --git a/content/data_sci/stochastic_process/MOC.md b/content/data_sci/stochastic_process/MOC.md
new file mode 100644
index 000000000..8a4b12b6c
--- /dev/null
+++ b/content/data_sci/stochastic_process/MOC.md
@@ -0,0 +1,9 @@
+---
+title: Stochastic Process - MOC
+tags:
+ - MOC
+ - statistics
+ - stochastic-process
+---
+# Basic Concept
+
diff --git a/content/data_sci/stochastic_process/stationary_process.md b/content/data_sci/stochastic_process/stationary_process.md
new file mode 100644
index 000000000..f5d4fdf94
--- /dev/null
+++ b/content/data_sci/stochastic_process/stationary_process.md
@@ -0,0 +1,12 @@
+---
+title: Stationary Process
+tags:
+ - statistics
+ - math
+ - stochastic-process
+ - signal-processing
+---
+
+# Reference
+
+* [“平稳过程.” 维基百科,自由的百科全书, 11 Nov. 2021. _Wikipedia_, https://zh.wikipedia.org/w/index.php?title=%E5%B9%B3%E7%A8%B3%E8%BF%87%E7%A8%8B&oldid=68615985.](https://zh.wikipedia.org/zh/%E5%B9%B3%E7%A8%B3%E8%BF%87%E7%A8%8B)
\ No newline at end of file
diff --git a/content/data_sci/visualization/visual_style/attachments/CHARTstyleguide_20170505.pdf b/content/data_sci/visualization/visual_style/attachments/CHARTstyleguide_20170505.pdf
new file mode 100644
index 000000000..5e201281c
Binary files /dev/null and b/content/data_sci/visualization/visual_style/attachments/CHARTstyleguide_20170505.pdf differ
diff --git a/content/data_sci/visualization/visual_style/the_economist_style.md b/content/data_sci/visualization/visual_style/the_economist_style.md
new file mode 100644
index 000000000..fd7f0da8d
--- /dev/null
+++ b/content/data_sci/visualization/visual_style/the_economist_style.md
@@ -0,0 +1,10 @@
+---
+title: The economist style graph
+tags:
+ - data-visual
+ - statistics
+---
+# Reference
+
+* [The Economist visual style guide.pdf](https://pinktalk.online/data_sci/visual_style/attachments/CHARTstyleguide_20170505.pdf)
+* [Kavicky, Radovan. “2022-07-01 Making Economist Style Plots in Matplotlib.” _Deepnote_, https://deepnote.com/@radovankavicky/2022-07-01-Making-Economist-Style-Plots-in-Matplotlib-84f9c86d-1559-4654-b226-3dcff8be3408. Accessed 10 Oct. 2023.](https://deepnote.com/@radovankavicky/2022-07-01-Making-Economist-Style-Plots-in-Matplotlib-84f9c86d-1559-4654-b226-3dcff8be3408)
\ No newline at end of file
diff --git a/content/data_sci/visualization/visual_style/visual_information_theory.md b/content/data_sci/visualization/visual_style/visual_information_theory.md
new file mode 100644
index 000000000..1d045096d
--- /dev/null
+++ b/content/data_sci/visualization/visual_style/visual_information_theory.md
@@ -0,0 +1,10 @@
+---
+title: Visual Information Theory
+tags:
+- data
+- visualization-tech
+---
+
+# Reference
+
+[https://colah.github.io/posts/2015-09-Visual-Information/](https://colah.github.io/posts/2015-09-Visual-Information/)
\ No newline at end of file
diff --git a/content/data_sci/visualization/visual_tool/Tableau/tableau_learn_basic.md b/content/data_sci/visualization/visual_tool/Tableau/tableau_learn_basic.md
new file mode 100644
index 000000000..8f240318a
--- /dev/null
+++ b/content/data_sci/visualization/visual_tool/Tableau/tableau_learn_basic.md
@@ -0,0 +1,33 @@
+---
+title: Basic Knowledge of Tableau
+tags:
+ - visualization-tech
+ - tableau
+ - data-visual
+ - data
+---
+
+# What is Tableau
+
+Tableau is a powerful data-visualization tool that helps users turn complex data into intuitive, easy-to-understand charts and graphics. Some of its main features:
+
+1. **Data connection:** Tableau can connect to many data sources, including databases, Excel files, cloud storage and other data stores, so users can easily bring different sources together for analysis.
+
+2. **Data visualization:** Tableau's strength is its visualization capability. Users can build bar charts, line charts, scatter plots, maps and more with an intuitive drag-and-drop interface, without writing complex code.
+
+3. **Interactivity:** Tableau supports interactive analysis; users can click, drag and filter to explore the data, dig deeper, and discover the patterns and insights hidden behind it.
+
+4. **Dashboards:** Charts and graphics can be combined into dashboards to present a data story more completely and communicate insights to others effectively.
+
+5. **Forecasting:** Tableau also offers more advanced features, such as built-in predictive analysis tools that help forecast future trends from historical data.
+
+6. **Sharing and publishing:** Visualizations can be shared with others; workbooks, dashboards and charts can be exported as images, PDFs or interactive web applications.
+
+7. **Tableau Server and Tableau Online:** These are Tableau's server-side solutions for sharing and publishing workbooks and dashboards over a network. Tableau Server is usually deployed on-premises, while Tableau Online is Tableau's cloud service.
+
+
+Overall, Tableau is a powerful and flexible tool that fits many industries and fields, from business analysis to scientific research. It helps users understand and share data better and therefore make smarter decisions.
+
+
+# How to use Tableau
+
diff --git a/content/dota/attachments/Pasted image 20230918135105.png b/content/dota/attachments/Pasted image 20230918135105.png
new file mode 100644
index 000000000..ed5b209c2
Binary files /dev/null and b/content/dota/attachments/Pasted image 20230918135105.png differ
diff --git a/content/dota/dota2_learning_road.md b/content/dota/dota2_learning_road.md
new file mode 100644
index 000000000..dc7fc9eb0
--- /dev/null
+++ b/content/dota/dota2_learning_road.md
@@ -0,0 +1,16 @@
+---
+title: Dota2 Learning Road
+tags:
+ - dota2
+ - game
+---
+
+
+# Map 7.33
+
+
+
+* [A, Nathan, et al. “Dota 2 New Map April 2023 - Dota 2 Guide.” _IGN_, https://www.ign.com/wikis/dota-2/Dota_2_New_Map_April_2023. Accessed 18 Sept. 2023.](https://www.ign.com/wikis/dota-2/Dota_2_New_Map_April_2023)
+
+
+
diff --git a/content/food/MOC.md b/content/food/MOC.md
new file mode 100644
index 000000000..a8009dff5
--- /dev/null
+++ b/content/food/MOC.md
@@ -0,0 +1,13 @@
+---
+title: Food - MOC
+tags:
+ - food
+ - MOC
+---
+# Intro
+
+* [💯Evaluation Criteria](food/intro/evaluation_criteria.md)
+# Jude's Guide
+
+* [🥐Jude’s Guide](https://pinkr1ver.notion.site/17d0f10938f8407cb50910d24b668655?v=870bd05876044fdb91308ffa13c7ff01&pvs=4)
+# Study Note
diff --git a/content/food/intro/evaluation_criteria.md b/content/food/intro/evaluation_criteria.md
new file mode 100644
index 000000000..c1c4f7fee
--- /dev/null
+++ b/content/food/intro/evaluation_criteria.md
@@ -0,0 +1,33 @@
+---
+title: Evaluation Criteria
+tags:
+ - food
+---
+
+
+> [!abstract]
+> The evaluation criteria are highly subjective and change over time. They are strongly tied to my personal likes, dislikes, and feelings.
+# 💯Score
+
+This guide is scored on a five-point scale.
+* This guide uses a *tough scoring scale*, with 2.5 being a normal score.
+* This guide pays particular attention to *value for money*, and considers whether the pricing of each dish is reasonable.
+* Environment and service attitude may even outweigh flavor in this guide.
+* The score is **highly subjective!!!**, filled with intense emotional color and exaggerated expression.
+
+# 🏆Trophies
+
+The trophy mechanism is separate from the scoring, so if a restaurant has something special, whether it's the food or the service, it has a chance to win different trophies.
+
+Here are the requirements for each trophy:
+
+* Bronze - Makes you want to come again.
+* Silver - Considerable value for money in the neighborhood.
+* Gold - Distinctive, novel touches in the neighborhood.
+* Crystal - Original approaches.
+* Elite - Best in the area.
+* Legend - Best ever.
+
+# 🔥Nomination
+
+Every year, I will publish a nomination list of the food I had that year.
\ No newline at end of file
diff --git a/content/log/2023/7/attachments/3ed5fee41bd566be093bebd62a33d12.jpg b/content/log/2023/7/attachments/3ed5fee41bd566be093bebd62a33d12.jpg
new file mode 100644
index 000000000..0fda57126
Binary files /dev/null and b/content/log/2023/7/attachments/3ed5fee41bd566be093bebd62a33d12.jpg differ
diff --git a/content/log/2023/7/attachments/7JEC(63A65[8JFI[G6O`IIK_tmb.jpg b/content/log/2023/7/attachments/7JEC(63A65[8JFI[G6O`IIK_tmb.jpg
new file mode 100644
index 000000000..c32da9b85
Binary files /dev/null and b/content/log/2023/7/attachments/7JEC(63A65[8JFI[G6O`IIK_tmb.jpg differ
diff --git a/content/log/2023/7/attachments/Pasted image 20230701220633.png b/content/log/2023/7/attachments/Pasted image 20230701220633.png
new file mode 100644
index 000000000..bee56803f
Binary files /dev/null and b/content/log/2023/7/attachments/Pasted image 20230701220633.png differ
diff --git a/content/log/2023/7/attachments/[E]{JJU87WNI}DS)${(O}MB_tmb.jpg b/content/log/2023/7/attachments/[E]{JJU87WNI}DS)${(O}MB_tmb.jpg
new file mode 100644
index 000000000..d9d10e304
Binary files /dev/null and b/content/log/2023/7/attachments/[E]{JJU87WNI}DS)${(O}MB_tmb.jpg differ
diff --git a/content/log/2023/7/attachments/cloud.png b/content/log/2023/7/attachments/cloud.png
new file mode 100644
index 000000000..d9d10e304
Binary files /dev/null and b/content/log/2023/7/attachments/cloud.png differ
diff --git a/content/log/2023/7/log_01072023.md b/content/log/2023/7/log_01072023.md
new file mode 100644
index 000000000..d97aa924a
--- /dev/null
+++ b/content/log/2023/7/log_01072023.md
@@ -0,0 +1,22 @@
+---
+title: Log 2023.07.01 - 云朵也是会动滴☁️
+tags:
+- log
+- photography
+---
+
+On the way to the canteen for dinner, I saw the best cloud of the day.
+
+
+
+I thought I would come back and take a proper photo after dinner;
+
+but by the time I came back after eating, the cloud had already drifted away.
+Clouds move on, and isn't everything else just the same?
+
+---
+
+
+Here is another cloud photo, taken by senior classmate Jiang ☁️:
+
+
\ No newline at end of file
diff --git a/content/log/2023/7/log_03072023.md b/content/log/2023/7/log_03072023.md
new file mode 100644
index 000000000..90be43769
--- /dev/null
+++ b/content/log/2023/7/log_03072023.md
@@ -0,0 +1,11 @@
+---
+title: Log 2023.07.03 - K-means clustering algorithm for Pixel art style
+tags:
+- pixel-art
+- log
+- clustering
+---
+
+Today I used the k-means clustering algorithm to build a pixelation effect; it was quite fun.
+
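+A minimal sketch of the idea (not the exact script I used; the file name, block size, and cluster count are placeholder values): downscale the image into blocky pixels, then quantize its colors with k-means.
+
+```python
+# Minimal k-means pixel-art sketch ("photo.jpg", the 8x downscale factor, and k=16 are placeholders)
+import numpy as np
+from PIL import Image
+from sklearn.cluster import KMeans
+
+img = Image.open("photo.jpg").convert("RGB")
+small = img.resize((img.width // 8, img.height // 8), Image.NEAREST)  # downscale -> blocky pixels
+
+pixels = np.asarray(small).reshape(-1, 3).astype(float)
+kmeans = KMeans(n_clusters=16, n_init=10, random_state=0).fit(pixels)  # limited color palette
+quantized = kmeans.cluster_centers_[kmeans.labels_].reshape(small.size[1], small.size[0], 3)
+
+pixel_art = Image.fromarray(quantized.astype(np.uint8)).resize(img.size, Image.NEAREST)
+pixel_art.save("pixel_art.png")
+```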
+
\ No newline at end of file
diff --git a/content/log/2023/9/log_27092023.md b/content/log/2023/9/log_27092023.md
new file mode 100644
index 000000000..376530917
--- /dev/null
+++ b/content/log/2023/9/log_27092023.md
@@ -0,0 +1,22 @@
+---
+title: Log 2023.09.27 数学建模比赛结束的第二天
+tags:
+ - math
+ - log
+---
+
+# What I learn from MCM
+
+I had underestimated how deep the data science profession goes. Faced with a pile of dirty data, the most important thing we should take away from this MCM is what kind of standardized workflow to run.
+
+In my view, data preprocessing, i.e., the skill of distilling the table that is ultimately useful to us out of a large pile of data, matters the most, because real-world data is full of missing values, outliers, and errors. Detecting these problems is a real challenge, and after removing the outliers it is also worth thinking about what transformation to apply to each variable; perhaps this too can be handled with a grid search.
+
+The strongest memory from this MCM is getting a firmer grasp of the full workflow of a fitting task, the key part being running GridSearchCV with $R^2$ as the scoring metric. $R^2$ is a scaled measure of goodness of fit and a very good evaluation metric; the closer it is to 1, the better the fit.
+
+I also got to know hyperparameter optimization with Grid Search and Random Search; an SVM task had already taught me how much hyperparameters matter. Because hyperparameters cannot be learned during training, a grid search that simply tries out combinations is simple but effective.
+
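+As a reminder to myself, a minimal scikit-learn sketch of this workflow (the model and parameter grid are placeholders, not what we actually used in the contest):
+
+```python
+# Minimal GridSearchCV sketch with R^2 scoring (model and grid are placeholders)
+from sklearn.datasets import make_regression
+from sklearn.model_selection import GridSearchCV
+from sklearn.svm import SVR
+
+X, y = make_regression(n_samples=200, n_features=5, noise=10.0, random_state=0)
+
+param_grid = {"C": [0.1, 1, 10], "epsilon": [0.01, 0.1, 1.0]}  # hyperparameters cannot be learned, so try combinations
+search = GridSearchCV(SVR(kernel="rbf"), param_grid, scoring="r2", cv=5)
+search.fit(X, y)
+
+print(search.best_params_)  # best hyperparameter combination
+print(search.best_score_)   # cross-validated R^2; closer to 1 means a better fit
+```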
+How to do data augmentation, how to do feature selection, how to measure relationships between variables: there is so much for me to learn on the data science path. I need to pay attention to the fundamentals of statistics and keep sharpening my statistical thinking in my daily study.
+
+The contest workload was complex and heavy; a good workflow is a good architecture, but I got knocked around by the mess.
+
+**The MCM taught me a lesson.**
\ No newline at end of file
diff --git a/content/log/kk_unified_national_graduate_entrance_examination/attachments/Pasted image 20240219235533.png b/content/log/kk_unified_national_graduate_entrance_examination/attachments/Pasted image 20240219235533.png
new file mode 100644
index 000000000..896ddd473
Binary files /dev/null and b/content/log/kk_unified_national_graduate_entrance_examination/attachments/Pasted image 20240219235533.png differ
diff --git a/content/log/kk_unified_national_graduate_entrance_examination/attachments/Pasted image 20240225141612.png b/content/log/kk_unified_national_graduate_entrance_examination/attachments/Pasted image 20240225141612.png
new file mode 100644
index 000000000..9c43b513d
Binary files /dev/null and b/content/log/kk_unified_national_graduate_entrance_examination/attachments/Pasted image 20240225141612.png differ
diff --git a/content/log/kk_unified_national_graduate_entrance_examination/attachments/Pasted image 20240225141628.png b/content/log/kk_unified_national_graduate_entrance_examination/attachments/Pasted image 20240225141628.png
new file mode 100644
index 000000000..9c43b513d
Binary files /dev/null and b/content/log/kk_unified_national_graduate_entrance_examination/attachments/Pasted image 20240225141628.png differ
diff --git a/content/log/kk_unified_national_graduate_entrance_examination/attachments/Pasted image 20240225141635.png b/content/log/kk_unified_national_graduate_entrance_examination/attachments/Pasted image 20240225141635.png
new file mode 100644
index 000000000..2dc9f279a
Binary files /dev/null and b/content/log/kk_unified_national_graduate_entrance_examination/attachments/Pasted image 20240225141635.png differ
diff --git a/content/log/kk_unified_national_graduate_entrance_examination/attachments/Pasted image 20240225141740.png b/content/log/kk_unified_national_graduate_entrance_examination/attachments/Pasted image 20240225141740.png
new file mode 100644
index 000000000..e6acfbe38
Binary files /dev/null and b/content/log/kk_unified_national_graduate_entrance_examination/attachments/Pasted image 20240225141740.png differ
diff --git a/content/log/kk_unified_national_graduate_entrance_examination/attachments/Pasted image 20240225141748.png b/content/log/kk_unified_national_graduate_entrance_examination/attachments/Pasted image 20240225141748.png
new file mode 100644
index 000000000..04797b52e
Binary files /dev/null and b/content/log/kk_unified_national_graduate_entrance_examination/attachments/Pasted image 20240225141748.png differ
diff --git a/content/log/kk_unified_national_graduate_entrance_examination/attachments/Pasted image 20240225141914.png b/content/log/kk_unified_national_graduate_entrance_examination/attachments/Pasted image 20240225141914.png
new file mode 100644
index 000000000..04797b52e
Binary files /dev/null and b/content/log/kk_unified_national_graduate_entrance_examination/attachments/Pasted image 20240225141914.png differ
diff --git a/content/log/kk_unified_national_graduate_entrance_examination/attachments/Pasted image 20240225142012.png b/content/log/kk_unified_national_graduate_entrance_examination/attachments/Pasted image 20240225142012.png
new file mode 100644
index 000000000..c89bf1c4a
Binary files /dev/null and b/content/log/kk_unified_national_graduate_entrance_examination/attachments/Pasted image 20240225142012.png differ
diff --git a/content/log/kk_unified_national_graduate_entrance_examination/attachments/Pasted image 20240225142025.png b/content/log/kk_unified_national_graduate_entrance_examination/attachments/Pasted image 20240225142025.png
new file mode 100644
index 000000000..9a35174e4
Binary files /dev/null and b/content/log/kk_unified_national_graduate_entrance_examination/attachments/Pasted image 20240225142025.png differ
diff --git a/content/log/kk_unified_national_graduate_entrance_examination/kk_note.md b/content/log/kk_unified_national_graduate_entrance_examination/kk_note.md
new file mode 100644
index 000000000..b270d8151
--- /dev/null
+++ b/content/log/kk_unified_national_graduate_entrance_examination/kk_note.md
@@ -0,0 +1,32 @@
+---
+title: kk's note
+tags:
+ - log
+---
+
+## 2023.02.19
+
+
+
+
+## 2023.02.20
+
+
+
+
+
+
+## 2023.02.21
+
+
+
+
+
+## 2023.02.22
+
+
+
+## 2023.02.23
+
+
+
\ No newline at end of file
diff --git a/content/log/kk_unified_national_graduate_entrance_examination/study_plan.md b/content/log/kk_unified_national_graduate_entrance_examination/study_plan.md
new file mode 100644
index 000000000..2a64a8cf1
--- /dev/null
+++ b/content/log/kk_unified_national_graduate_entrance_examination/study_plan.md
@@ -0,0 +1,32 @@
+---
+title: kk's Unified National Graduate Entrance Examination —— Study Plan
+tags:
+ - log
+---
+
+# Plan 2.19 - 2.31
+
+
+## English
+
+* 50 words
+* One practice paper per day
+
+## Design History
+
+* One chapter every 2 days
+
+## Check-in Process
+
+* Daily report: https://docs.google.com/spreadsheets/d/1YjXogQvgYJGsnaAkCVwW5w76wowta-OMLF6D0Hvrlyo/edit#gid=0
+* Weekly review meeting: Sunday 9:00 AM
+* Wake-up check: target 7:00
+
+## Daily Study Schedule
+
+* Morning: 7:30 - 11:45
+* Afternoon: 1:30 - 6:00
+* Evening: 7:30 - 10:40
+* Bedtime: 11:30
+
+
diff --git a/content/log/log_MOC.md b/content/log/log_MOC.md
new file mode 100644
index 000000000..d0a322eac
--- /dev/null
+++ b/content/log/log_MOC.md
@@ -0,0 +1,19 @@
+---
+title: Log - MOC
+tags:
+- log
+- daily
+- 文学
+---
+
+# Log 2023
+
+## Log 2023.07
+
+* [Log 2023.07.01 - 云朵也是会动滴☁️](log/2023/7/log_01072023.md)
+* [Log 2023.07.03 - K-means clustering algorithm for Pixel art style](log/2023/7/log_03072023.md)
+
+
+## Log 2023.09
+
+* [Log 2023.09.27 数学建模比赛结束的第二天](log/2023/9/log_27092023.md)
\ No newline at end of file
diff --git a/content/plan/exhibition/attachments/1.png b/content/plan/exhibition/attachments/1.png
new file mode 100644
index 000000000..268d8eea6
Binary files /dev/null and b/content/plan/exhibition/attachments/1.png differ
diff --git a/content/plan/exhibition/attachments/]OG11]ENC[XCB@{6VUL%)6V_tmb 1.png b/content/plan/exhibition/attachments/]OG11]ENC[XCB@{6VUL%)6V_tmb 1.png
new file mode 100644
index 000000000..268d8eea6
Binary files /dev/null and b/content/plan/exhibition/attachments/]OG11]ENC[XCB@{6VUL%)6V_tmb 1.png differ
diff --git a/content/plan/exhibition/attachments/]OG11]ENC[XCB@{6VUL%)6V_tmb.png b/content/plan/exhibition/attachments/]OG11]ENC[XCB@{6VUL%)6V_tmb.png
new file mode 100644
index 000000000..268d8eea6
Binary files /dev/null and b/content/plan/exhibition/attachments/]OG11]ENC[XCB@{6VUL%)6V_tmb.png differ
diff --git a/content/plan/exhibition/whisky_l.md b/content/plan/exhibition/whisky_l.md
new file mode 100644
index 000000000..74171d0c1
--- /dev/null
+++ b/content/plan/exhibition/whisky_l.md
@@ -0,0 +1,27 @@
+---
+title: Whisky L
+tags:
+ - dream
+ - exhibition
+---
+
+# What is whisky L
+
+
+> [!abstract]
+> From the website:
+>
+> At the beginning of the China Whisky exhibition in 2009, our goal is to assist whisky brands to enter the Chinese market, to build a platform for all whisky brands, and to provide a platform for whisky enthusiasts. For the past 11 years, we have worked hard to make Whisky L! the largest whisky exhibition and spirits show of the highest quality in Asia.
+
+
+## 2023:
+
+
+
+The 2023 edition was held from 2023.08.10 to 2023.08.13.
+
+So tickets generally become available around August; this is already added to my calendar.
+
+# Reference
+
+* [索菲亚一斤半. _花99去中国最大威士忌酒展,居然喝到将近100000?_哔哩哔哩_bilibili_. https://www.bilibili.com/video/BV1zm4y1K7h2/. Accessed 21 Sept. 2023.](https://www.bilibili.com/video/BV1zm4y1K7h2/)
\ No newline at end of file
diff --git a/content/plan/life.md b/content/plan/life.md
new file mode 100644
index 000000000..e55b12878
--- /dev/null
+++ b/content/plan/life.md
@@ -0,0 +1,58 @@
+---
+title: Life List🚀
+tags:
+ - dream
+---
+
+# Exhibition
+
+* [Whisky L](plan/exhibition/whisky_l.md)
+* Pingyao International Film Festival
+
+# Show
+
+* CS major
+* Ti
+* VCT
+* 昨日的美食 第二季
+# Film
+
+* 燃烧
+* 偶然と想像
+* 激乐人心
+* 盲井
+* ~~昆池岩~~
+* 玻尔 Pearl
+
+# Book
+
+* 窄门
+* 沧浪之水
+* 跃动青春
+
+# Cuisine
+
+## Tianjin
+
+* 奶爆两样
+
+## Taizhou
+
+* 麻糍煎蛋
+* 肉沫炊饭
+
+# Travel
+
+* Faroe Islands
+* Kyoto Animation (KyoAni) pilgrimage
+* Iceland
+
+
+# Skills
+
+* Skydiving
+
+
+# Games
+
+* Using Hitbox play Street Fighter
\ No newline at end of file
diff --git a/content/recent.md b/content/recent.md
new file mode 100644
index 000000000..2ed2bfe2b
--- /dev/null
+++ b/content/recent.md
@@ -0,0 +1,13 @@
+---
+title: Recent note
+tags:
+- recent
+- readme
+---
+
+```dataview
+table WITHOUT ID file.link AS "Title",file.mtime as "Edit Time"
+from ""
+sort file.mtime desc
+limit 10
+```
\ No newline at end of file
diff --git a/content/research_career/UWB_about/.archieve/UWB_reflected_wave_simulation_experiment_by_VNA.md b/content/research_career/UWB_about/.archieve/UWB_reflected_wave_simulation_experiment_by_VNA.md
new file mode 100644
index 000000000..958d2356b
--- /dev/null
+++ b/content/research_career/UWB_about/.archieve/UWB_reflected_wave_simulation_experiment_by_VNA.md
@@ -0,0 +1,279 @@
+---
+title: UWB reflected wave simulation experiment by VNA
+tags:
+ - research-about
+ - VNA
+ - UWB
+---
+# Experiment Equipment and Simple Introduction
+
+
+
+
+
+We use a VNA, the KEYSIGHT E5063A ENA, to study how the reflected UWB signal changes. We emit a frequency-sweeping signal covering 100kHz - 6.5GHz from VNA port 1 and receive the reflected signal at port 2. From this we obtain the scattering parameters S11 and S21, and we analyze these two parameters to draw the corresponding conclusions.
+
+## Detailed Step
+
+
+
+
+Table. The VNA Setup for Measuring
+
+| Option | Value |
+| --- | --- |
+| RF Power | -5dBm |
+| IF Bandwidth | 70kHz |
+| Averaging | 64 |
+| Start Frequency | 100kHz |
+| Stop Frequency | 6.5GHz |
+| Number of Frequency Points | 10001 |
+
+We place an iron plate at different distances in front of the VNA device and record the scattering-parameter data.
+
+# UWB Signal Decomposition and Reconstruction by Different Frequencies
+
+Since our instrument, the E5063A ENA series network analyzer, covers a 100kHz - 6.5GHz signal range, we design a code experiment to decompose a Gaussian-pulse UWB signal whose center frequency is about 3.25GHz and whose bandwidth is about 6.5GHz.
+
+First, we generate the UWB Gaussian-pulse signal using `scipy.signal.gausspulse`, which produces a Gaussian pulse modulated by a sinusoid. The formula is:
+
+ $$
+ x(t) = e^{j 2\pi f t} e^{-\frac{1}{2 \sigma^2} \cdot t^2}
+ $$
+
+
+```python
+import numpy as np
+from scipy import signal
+
+# Generate UWB signal parameters
+sampling_rate = 10e13 # Sampling rate in Hz
+duration = 1e-8 # Duration of the signal in seconds
+amplitude = 1.0 # Amplitude of the UWB signal
+start_frequency = 100e3
+end_frequency = 6.5e9
+center_frequency = (start_frequency + end_frequency) / 2 # Center frequency of the UWB signal
+bandwidth = (end_frequency - start_frequency) / center_frequency # Bandwidth of the UWB signal
+
+# Generate time vector
+t = np.linspace(-duration/2, duration/2, int(sampling_rate * duration))
+
+# Generate UWB signal
+uwb_signal = signal.gausspulse(t, fc=center_frequency, bw=bandwidth)
+```
+
+Then we apply the FFT (Fast Fourier Transform) to analyze the spectrum of this UWB signal:
+
+```python
+# Perform Fourier transform on the UWB signal
+spectrum = np.fft.fft(uwb_signal)
+frequencies = np.fft.fftfreq(len(uwb_signal), d=1/sampling_rate)
+# spectrum = np.fft.fftshift(spectrum)
+# frequencies = np.fft.fftshift(frequencies)
+
+
+# Get amplitude and phase from the spectrum
+amplitude_spectrum = np.abs(spectrum)
+phase_spectrum = np.angle(spectrum)
+
+# Reconstruct UWB signal from amplitude and phase spectra
+reconstructed_signal = np.fft.ifft(amplitude_spectrum * np.exp(1j * phase_spectrum))
+
+```
+
+
+Plot this data,
+
+```python
+import matplotlib.pyplot as plt
+
+sorted_indices = np.argsort(frequencies)
+
+frequencies_sorted = frequencies[sorted_indices]
+amplitude_spectrum_sorted = amplitude_spectrum[sorted_indices]
+phase_spectrum_sorted = phase_spectrum[sorted_indices]
+
+# Plotting the results
+plt.figure(figsize=(10, 6))
+
+# Plot time-domain UWB signal
+plt.subplot(2, 2, 1)
+plt.plot(t, uwb_signal)
+plt.title('UWB Signal (Time Domain)')
+plt.xlabel('Time')
+plt.ylabel('Amplitude')
+
+# Plot frequency spectrum
+plt.subplot(2, 2, 2)
+plt.plot(frequencies_sorted[10:-10], amplitude_spectrum_sorted[10:-10])
+plt.title('Frequency Spectrum')
+plt.xlabel('Frequency')
+plt.ylabel('Amplitude')
+
+# Plot phase spectrum
+plt.subplot(2, 2, 3)
+plt.plot(frequencies_sorted[10:-10], phase_spectrum_sorted[10:-10])
+plt.title('Phase Spectrum')
+plt.xlabel('Frequency')
+plt.ylabel('Phase')
+
+# Plot reconstructed UWB signal
+plt.subplot(2, 2, 4)
+plt.plot(t, reconstructed_signal)
+plt.title('Reconstructed UWB Signal (Time Domain)')
+plt.xlabel('Time')
+plt.ylabel('Amplitude')
+
+plt.tight_layout()
+plt.show()
+```
+
+
+Here is the result:
+
+
+
+
+Because we only have the 100kHz - 6.5GHz signal range, we focus on the data within this range:
+
+```python
+mask = abs(frequencies) < 10e9
+frequencies_mask = frequencies[mask]
+amplitude_spectrum_mask = amplitude_spectrum[mask]
+phase_spectrum_mask = phase_spectrum[mask]
+
+reconstructed_signal = np.fft.ifft(amplitude_spectrum_mask * np.exp(1j * phase_spectrum_mask))
+
+sorted_indices = np.argsort(frequencies_mask)
+frequencies_sorted = frequencies_mask[sorted_indices]
+amplitude_spectrum_sorted = amplitude_spectrum_mask[sorted_indices]
+phase_spectrum_sorted = phase_spectrum_mask[sorted_indices]
+
+plt.figure(figsize=(10, 6))
+
+# Plot time-domain UWB signal
+plt.subplot(2, 2, 1)
+plt.plot(t, uwb_signal)
+plt.title('UWB Signal (Time Domain)')
+plt.xlabel('Time')
+plt.ylabel('Amplitude')
+
+# Plot frequency spectrum
+plt.subplot(2, 2, 2)
+plt.plot(frequencies_sorted, amplitude_spectrum_sorted)
+plt.axvline(x=3.25e9, color='r', linestyle='--')
+plt.axvline(x=6.5e9, color='r', linestyle='--')
+plt.title('Frequency Spectrum')
+plt.xlabel('Frequency')
+plt.ylabel('Amplitude')
+
+# Plot phase spectrum
+plt.subplot(2, 2, 3)
+plt.plot(frequencies_sorted, phase_spectrum_sorted)
+plt.title('Phase Spectrum')
+plt.xlabel('Frequency')
+plt.ylabel('Phase')
+
+# Plot reconstructed UWB signal
+plt.subplot(2, 2, 4)
+plt.plot(np.linspace(-duration/2, duration/2, len(reconstructed_signal)), reconstructed_signal)
+plt.title('Reconstructed UWB Signal (Time Domain)')
+plt.xlabel('Time')
+plt.ylabel('Amplitude')
+
+plt.tight_layout()
+plt.show()
+
+```
+
+The result:
+
+
+
+Here we can see that in the `scipy.signal.gausspulse` function the signal's bandwidth is controlled by the fractional-bandwidth parameter, which means,
+
+$$
+\text{fractional bandwidth} = \frac{\text{upper frequency of the signal} - \text{lower frequency of the signal}}{\text{center frequency}}
+$$
+
+Meanwhile, the bandwidth of the Gaussian pulse is defined at the reference level where the spectral amplitude has fallen to about 50% of its peak (the -6 dB default `bwr` in `scipy.signal.gausspulse`).
+
+We can focus more closely on our signal range, like this:
+
+
+
+With this Gaussian-pulse spectrum, we have a chance to reconstruct the UWB signal from the VNA frequency-sweeping signal.
+## Conclusion From this experiment
+
+From this experiment we conclude that signals of different frequencies can be superimposed to reconstruct a Gaussian pulse and thereby generate a UWB signal; the parameters are taken from the Gaussian-pulse spectrum.
+
+$$
+\text{UWB Signal} = \sum_{i}^N \alpha_i \sin{(\omega_i t + \phi_i)}
+$$
+
+* $\alpha_i$ is the amplitude of each frequency component used to reconstruct the UWB signal
+* $\omega_i$ is the angular frequency of each component
+* $\phi_i$ is the phase of each frequency component
+
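+A minimal sketch of this superposition, reusing `amplitude_spectrum`, `phase_spectrum`, `frequencies`, `t`, and `uwb_signal` from the code above (illustrative only; it uses the cosine form that matches the FFT phase convention):
+
+```python
+# Approximate the UWB pulse as a finite sum of sinusoids (alpha_i, omega_i, phi_i from the FFT above)
+mask = (frequencies > 0) & (frequencies < 6.5e9)          # keep only components our VNA can emit
+alphas = 2 * amplitude_spectrum[mask] / len(uwb_signal)   # single-sided amplitude of each component
+omegas = 2 * np.pi * frequencies[mask]
+phis = phase_spectrum[mask]
+
+tau = t - t[0]                                            # FFT phases are referenced to the first sample
+superposed = np.zeros_like(t)
+for a, w, p in zip(alphas, omegas, phis):
+    superposed += a * np.cos(w * tau + p)                 # cosine form matching the FFT phase convention
+```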
+
+# Data Collection & Data Analysis
+
+As described in the experiment steps, we obtain the S11 and S21 parameters for the iron plate at different distances from the VNA. Here are the results.
+
+We set the distance to 5cm - 30cm in 5cm intervals. We also record data with no iron plate in front, which we call 'initial'. In total there are 7 sets of experimental data.
+
+The S21 parameter results:
+
+
+
+
+
+
+Also, we can compare each S11 and S21 in one graph, here:
+
+
+
+We can also calculate the area under the curve of the linear S21 trace to compare at which iron-plate distance we receive more energy from the reflected wave.
+
+
+
+
+We can see that when the iron plate is near the VNA, the distance heavily influences the energy of the reflected wave.
+
+## Signal wave observation
+
+Here are three signals we can observe:
+
+
+
+* The frequency-sweeping signal reconstructed into a UWB pulse
+* The signal after port 1, where distortion occurs; we can recover it from the S11 parameter
+* The received signal, which we can calculate from the S21 parameter
+
+Here's the graph detail:
+
+
+
+
+
+
+
+
+# Problem
+
+* In this report we use the VNA frequency-sweeping signal to construct a UWB signal for analysis; we do not generate the UWB signal directly, and we do not yet have a good method to generate a UWB signal from the VNA device.
\ No newline at end of file
diff --git a/content/research_career/UWB_about/.archieve/UWB_reflected_wave_simulation_experiment_by_VNA_adjust.md b/content/research_career/UWB_about/.archieve/UWB_reflected_wave_simulation_experiment_by_VNA_adjust.md
new file mode 100644
index 000000000..36b18c961
--- /dev/null
+++ b/content/research_career/UWB_about/.archieve/UWB_reflected_wave_simulation_experiment_by_VNA_adjust.md
@@ -0,0 +1,202 @@
+---
+title: UWB reflected wave simulation experiment by VNA
+tags:
+ - VNA
+ - UWB
+ - experiment
+---
+# Objective
+
+1. A VNA based experiment setup is constructed for mimicking UWB apparatus.
+2. Produce UWB signals by synthesizing waveforms with various frequencies.
+3. Validate the setup and UWB signal transmitting/receiving processes by testing the distance among the system and a moving metal obstacle.
+
+# Introduction
+
+## UWB signal generation
+
+Our VNA can only emit a frequency-sweeping signal, i.e., sinusoids of different frequencies within a specific range; for our device the range is about 100kHz - 6.5GHz. How to emit a UWB signal is therefore the key problem in our experiment.
+
+The key assumption of the experiment is that the path from transmission at VNA port 1 to reception at port 2 is a linear process. That is,
+
+$$
+\text{Eject Signal}, \quad x(t) = \sum_{i}^{N}x_i(t)
+$$
+$$
+\text{Receive Signal}, \quad y(t)=\sum_{i}^N y_i(t)
+$$
+That is, we can regard the linear superposition of the different frequency components of the swept signal as an emitted UWB signal, and the superposition of the returned components as the reflected UWB signal.
+
+Through simulation we can use the fast Fourier transform to obtain the magnitudes and phases of the different frequency components that make up the Gaussian pulse, which then guides how we subsequently adjust the VNA to transmit UWB signals.
+
+Here are the results of the simulation experiment,
+
+
+
+Fig 1. Gaussian pulse Fourier transform to obtain magnitude and phase values of different frequency components
+Fig 2. Gaussian pulse Fourier transform to obtain magnitude and phase values of different frequency components, focusing on the range we can generate in our VNA device, 100kHz - 6.5GHz
+
+Therefore, in subsequent experiments we can adjust the different frequency components of the swept signal we emit to match the frequency-domain diagram above and, by the superposition principle of linear systems, regard this as emitting a Gaussian-pulse UWB signal.
+
+# Hypothesis
+
+We assume that the reflection of a sinusoidal wave from the reflecting material is a linear process. Based on this linearity, we can build an environment for studying UWB reflected signals.
+
+
+# Materials
+
+* VNA device, KEYSIGHT E5063A ENA
+* N Male to SMA Female Connector
+* UWB antenna
+* Iron plate
+
+# Method
+
+## Setup Diagram
+
+
+
+Fig 3. Experimental setup overview diagram, using the VNA device to transmit a frequency-swept signal from port 1; after reflection from the medium, the echo signal is received at port 2, and the VNA measures the scattering parameters used to calculate the transmitted and received signals
+Fig 4. Photograph of the actual experimental setup; the reflective medium in this picture is an iron plate
+
+
+
+Table 1. The VNA Setup for Measuring
+
+| Option | Value |
+| --- | --- |
+| RF Power | -5dBm |
+| IF Bandwidth | 70kHz |
+| Averaging | 64 |
+| Start Frequency | 100kHz |
+| Stop Frequency | 6.5GHz |
+| Number of Frequency Points | 10001 |
+
+## Procedure
+
+1. Start the VNA device.
+2. Set up the VNA measurement parameters.
+3. Set the iron plate at different distances from the VNA: 5cm, 10cm, 15cm, 20cm, 25cm, 30cm, and $\infty$ (meaning no reflecting medium is placed).
+4. Get the S11 and S21 scattering parameters for the iron plate at each distance (a short data-loading sketch follows this list).
+
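+Assuming the sweeps are exported from the ENA as Touchstone `.s2p` files (an assumption about the export format; the file name below is a placeholder), a sketch like the following with the `scikit-rf` package can pull out the S11/S21 traces used in the analysis below:
+
+```python
+# Sketch: load one exported two-port sweep and extract S11/S21 ("iron_plate_5cm.s2p" is a placeholder)
+import numpy as np
+import skrf as rf
+
+ntwk = rf.Network("iron_plate_5cm.s2p")   # Touchstone file exported from the ENA
+freqs = ntwk.f                            # frequency points in Hz
+s11 = ntwk.s[:, 0, 0]                     # complex S11 vs frequency
+s21 = ntwk.s[:, 1, 0]                     # complex S21 vs frequency
+s21_db = 20 * np.log10(np.abs(s21))       # logarithmic magnitude, as plotted below
+```
+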
+# Results
+
+## Scattering Parameters - S11, S21
+
+First we focus on the raw data we obtained, the scattering parameters.
+
+$$
+S_{11} = \frac{V_{\text{reflected at port1}}}{V_{\text{towards port1}}}
+$$
+$$
+S_{21} = \frac{V_{\text{out of port2}}}{V_{\text{towards port1}}}
+$$
+S11 is defined as the square root of the ratio of the energy reflected from the Port1 port to the input energy, and is often simplified as the ratio of the equivalent reflected voltage to the equivalent incident voltage.
+
+S21 represents the insertion loss, i.e., how much energy has been transmitted to the destination (port 2). The larger the value the better; the ideal value is 1 (0 dB), and a larger S21 means more efficient transmission. It is generally recommended that S21 > 0.7 (about -3 dB) before the signal transmission is considered to have occurred.
+
+Here is our S21 result in logarithmic magnitude, i.e., the amplitude in dB:
+
+
+
+Fig 5. The S21 result for different distances between the iron plate and the VNA, in logarithmic magnitude; the "initial" legend means no iron plate is set.
+
+We can convert the logarithmic-magnitude result into a linear result for convenience; here is the result.
+
+
+Fig 6. The S21 result for different distances between the iron plate and the VNA, on a linear scale; the "initial" legend means no iron plate is set.
+
+
+## Energy receiving graph for different distance
+
+For the linear form of S21, we calculate the area under its curve to qualitatively analyze the energy received at port 2. Here is the result (a short sketch of the computation follows the figure):
+
+
+Fig 7. Area under the linear S21 curve for each distance
+
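+A small sketch of these two steps, reusing `freqs` and `s21_db` from the loading sketch above (illustrative only):
+
+```python
+# Convert S21 from dB to linear magnitude and integrate over frequency (qualitative energy measure)
+s21_linear = 10 ** (s21_db / 20)          # dB -> linear voltage ratio
+auc = np.trapz(s21_linear, freqs)         # area under the linear S21 curve
+```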
+
+## Signal Observation
+
+The different frequency components of the swept signal are superimposed to obtain what we regard as the transmitted signal; from the S11 parameter we obtain the signal actually launched from the antenna after losses, and from the S21 parameter we obtain the received signal.
+
+Below is a generalized graph of the signals in each part of our experiment and graphs specific to each signal.
+
+Generalized graph of the signals in each part of our experiment:
+
+
+
+Fig 8. The different signals in this experiment
+
+Graphs specific to each signal:
+
+
+
+Fig 9. UWB impulse signals obtained by linear superposition of different frequency components
+Fig 11. UWB echo signals for different distances between the iron plate and the VNA
+
+
+
+
+Table 2. Peak-to-peak value of the received signal at each distance
+
+| Distance | Peak-to-peak value (time-domain signal) |
+| --- | --- |
+| 5cm | 0.010178 |
+| 10cm | 0.006971 |
+| 15cm | 0.005052 |
+| 20cm | 0.005065 |
+| 25cm | 0.005073 |
+| 30cm | 0.005099 |
+
+
+# Conclusion and Problem
+
+## Conclusion
+
+* We successfully constructed a VNA-based system for ranging with UWB echo signals, as demonstrated by the S21 AUC: when the iron plate was placed at different distances we clearly received echo signals with different energies, especially in the 5cm - 25cm range. For far-field distances the ranging capability loses its effect. This accomplishes the purpose of our experiment.
+* This is also confirmed by the time-domain signals reconstructed from the S-parameters, which show a clear difference in strength when the iron plate is placed at different distances, decreasing sharply in the 5cm - 15cm range, as can be seen in Fig 11 and Table 2.
+
+## Problem
+
+* The experimental environment needs to be improved: we do not have equipment to clamp the plate for **accurate distance measurement**, so the plate was positioned by hand, with an accuracy within 1cm.
+* Compared with mature UWB transmitters, the **lack of feed-forward circuitry** in the VNA significantly affects both the strength of the signal we transmit and the signal we receive, so we may not capture all of the signal we should. Even so, the setup can still be considered to have a ranging capability of 0 - 25cm.
\ No newline at end of file
diff --git a/content/research_career/UWB_about/UWB_signal_generate.md b/content/research_career/UWB_about/UWB_signal_generate.md
new file mode 100644
index 000000000..a6ad00b4b
--- /dev/null
+++ b/content/research_career/UWB_about/UWB_signal_generate.md
@@ -0,0 +1,84 @@
+---
+title: How to generate UWB signal
+tags:
+ - UWB
+ - signal-processing
+---
+# Actual Signals which find use in UWB systems
+
+* Gaussian-derived pulses
+* Edge-derived pulses
+* Sinc pulses
+* Truncated sine pulses
+* Chirp signals (frequency sweep)
+
+## Gaussian-derived pulses
+
+Time-domain function:
+
+$$
+T_n(t) = \frac{\tau^n (\frac{n}{2})!}{n!} \frac{d^n}{d t^n} e^{-\frac{t^2}{\tau^2}}
+$$
+Frequency-domain function:
+
+
+$$
+F_n(\omega) = \frac{\tau^n (\frac{n}{2})!}{n!} (j\omega)^n \sqrt{\pi \tau^2}\, e^{-\frac{\tau^2 \omega^2}{4}}
+$$
+
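+As a quick illustration of the first pulse family above, a minimal numerical sketch of an n-th order Gaussian-derivative pulse (τ and n below are arbitrary example values, not taken from any referenced design):
+
+```python
+# Sketch: n-th order Gaussian-derivative pulse evaluated numerically (tau and n are example values)
+import numpy as np
+
+tau = 0.5e-9                       # pulse width parameter, 0.5 ns (example)
+n = 1                              # 1st derivative = Gaussian monocycle
+t = np.linspace(-3e-9, 3e-9, 2001)
+
+gauss = np.exp(-(t / tau) ** 2)
+pulse = gauss.copy()
+for _ in range(n):                 # repeated numerical differentiation d^n/dt^n
+    pulse = np.gradient(pulse, t)
+pulse /= np.max(np.abs(pulse))     # normalize for plotting/comparison
+```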
+
+# Methods of generating UWB signals
+
+
+There are two methods to generate UWB signals.
+
+1. Radio Frequency (RF)/ microwave analogue techniques
+2. Digital synthesis methods such as direct digital synthesis (DDS).
+
+
+## RF/microwave analogue techniques
+
+
+Modern analogue techniques make use of solid-state devices such as diodes and transistors.
+
+Diodes:
+* Step recovery diodes (SRDs)
+* Tunnel diodes
+* Schottky diodes
+
+## DDS
+
+### Specific Example of using DDS to generate UWB signal
+
+
+
+
+The article [Circularly Polarized Ultra-Wideband Radar System for Vital Signs Monitoring](https://ieeexplore.ieee.org/document/6491501) uses an AD9959 DDS to control the UWB pulse repetition frequency (PRF). This DDS can generate sinusoids up to 250MHz with 0.1-Hz frequency tuning resolution. It has four channels; one is used for the transmitted pulse and one for the reference pulse stored for the receiver.
+
+The sinusoidal outputs of the DDS are amplified by [op-amps](signal_processing/device_and_components/op_amp.md) (a Texas Instruments OPA699 in that article). After amplification, the signal is fed to a [step recovery diode](signal_processing/device_and_components/SRD.md) (SRD).
+
+The **cascaded shunt-mode SRD** with the **decreasing-lifetime method** of pulse generation produces high-amplitude pulses of 3 $V_{p-p}$ at low PRFs (megahertz range), so the pulse generator can directly drive the antenna subsystem, removing the need for expensive broadband power amplifiers.
+
+This method is introduced in another article, [Chan, K. K. M., et al. “Efficient Passive Low-Rate Pulse Generator for Ultra-Wideband Radar.” _IET Microwaves, Antennas & Propagation_, vol. 4, no. 12, 2010, p. 2196. _DOI.org (Crossref)_, https://doi.org/10.1049/iet-map.2010.0030.](http://dx.doi.org/10.1049/iet-map.2010.0030), which states that a commonly used scheme for UWB pulse generation is the **shunt-mode SRD impulse generator**.
+
+
+
+
+
+The source, a 50 $\Omega$ waveform generator, generates CWs with amplitudes greater than the turn-on voltages of SRD1 and SRD2. L1 and SRD1 form the first stage of the shunt-mode harmonic generator, while L2 and SRD2 form the second stage; the first stage is cascaded with the second in a series configuration.
+
+For different UWB applications, such as 5GHz, 10GHz, ..., we can calculate the component parameters for the design. The key formulas are:
+
+$$
+\text{Pulse Width} = \pi \sqrt{LC}
+$$
+
+$$
+\text{Time constant} = RC
+$$
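+
+As an illustration, assuming a target pulse width of about 100 ps and a capacitance of 1 pF (both example values, not taken from the cited article), the required inductance follows directly from the pulse-width relation:
+
+```python
+# Solve Pulse Width = pi * sqrt(L * C) for L, with example target values
+import numpy as np
+
+pulse_width = 100e-12    # 100 ps target pulse width (example)
+C = 1e-12                # 1 pF assumed capacitance (example)
+L = (pulse_width / np.pi) ** 2 / C
+print(f"L = {L * 1e9:.2f} nH")   # about 1.01 nH for these example values
+```
+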
+After the SRD-based pulse generator, the CWs are transformed into UWB pulses. The last step is to radiate the UWB signal through the feed network and antenna.
+
+# Reference
+
+* [Papers Read in 2023.11](research_career/papers_read/papers_2023_11.md)
+* [Chan, K. K. M., et al. “Efficient Passive Low-Rate Pulse Generator for Ultra-Wideband Radar.” _IET Microwaves, Antennas & Propagation_, vol. 4, no. 12, 2010, p. 2196. _DOI.org (Crossref)_, https://doi.org/10.1049/iet-map.2010.0030.](http://dx.doi.org/10.1049/iet-map.2010.0030).
\ No newline at end of file
diff --git a/content/research_career/UWB_about/attachments/Amp_AUC 1.png b/content/research_career/UWB_about/attachments/Amp_AUC 1.png
new file mode 100644
index 000000000..c4e2fbaf3
Binary files /dev/null and b/content/research_career/UWB_about/attachments/Amp_AUC 1.png differ
diff --git a/content/research_career/UWB_about/attachments/Amp_AUC 2.png b/content/research_career/UWB_about/attachments/Amp_AUC 2.png
new file mode 100644
index 000000000..db4066696
Binary files /dev/null and b/content/research_career/UWB_about/attachments/Amp_AUC 2.png differ
diff --git a/content/research_career/UWB_about/attachments/Amp_AUC.png b/content/research_career/UWB_about/attachments/Amp_AUC.png
new file mode 100644
index 000000000..dacd71dd7
Binary files /dev/null and b/content/research_career/UWB_about/attachments/Amp_AUC.png differ
diff --git a/content/research_career/UWB_about/attachments/Figure_1 1.png b/content/research_career/UWB_about/attachments/Figure_1 1.png
new file mode 100644
index 000000000..fb2b91fda
Binary files /dev/null and b/content/research_career/UWB_about/attachments/Figure_1 1.png differ
diff --git a/content/research_career/UWB_about/attachments/Figure_1 2.png b/content/research_career/UWB_about/attachments/Figure_1 2.png
new file mode 100644
index 000000000..dc1d83bdd
Binary files /dev/null and b/content/research_career/UWB_about/attachments/Figure_1 2.png differ
diff --git a/content/research_career/UWB_about/attachments/Figure_1.png b/content/research_career/UWB_about/attachments/Figure_1.png
new file mode 100644
index 000000000..8ce9fa2bf
Binary files /dev/null and b/content/research_career/UWB_about/attachments/Figure_1.png differ
diff --git a/content/research_career/UWB_about/attachments/Figure_2.png b/content/research_career/UWB_about/attachments/Figure_2.png
new file mode 100644
index 000000000..7cc80a044
Binary files /dev/null and b/content/research_career/UWB_about/attachments/Figure_2.png differ
diff --git a/content/research_career/UWB_about/attachments/Pasted image 20231102151328.png b/content/research_career/UWB_about/attachments/Pasted image 20231102151328.png
new file mode 100644
index 000000000..750c63686
Binary files /dev/null and b/content/research_career/UWB_about/attachments/Pasted image 20231102151328.png differ
diff --git a/content/research_career/UWB_about/attachments/Pasted image 20231102164316.png b/content/research_career/UWB_about/attachments/Pasted image 20231102164316.png
new file mode 100644
index 000000000..c2c385855
Binary files /dev/null and b/content/research_career/UWB_about/attachments/Pasted image 20231102164316.png differ
diff --git a/content/research_career/UWB_about/attachments/Pasted image 20231123213305.png b/content/research_career/UWB_about/attachments/Pasted image 20231123213305.png
new file mode 100644
index 000000000..96a8cc560
Binary files /dev/null and b/content/research_career/UWB_about/attachments/Pasted image 20231123213305.png differ
diff --git a/content/research_career/UWB_about/attachments/Pasted image 20231123213936.png b/content/research_career/UWB_about/attachments/Pasted image 20231123213936.png
new file mode 100644
index 000000000..d2b07c5d1
Binary files /dev/null and b/content/research_career/UWB_about/attachments/Pasted image 20231123213936.png differ
diff --git a/content/research_career/UWB_about/attachments/Pasted image 20231123231126.png b/content/research_career/UWB_about/attachments/Pasted image 20231123231126.png
new file mode 100644
index 000000000..82ec2a1c4
Binary files /dev/null and b/content/research_career/UWB_about/attachments/Pasted image 20231123231126.png differ
diff --git a/content/research_career/UWB_about/attachments/Pasted image 20231124002338.png b/content/research_career/UWB_about/attachments/Pasted image 20231124002338.png
new file mode 100644
index 000000000..82ec2a1c4
Binary files /dev/null and b/content/research_career/UWB_about/attachments/Pasted image 20231124002338.png differ
diff --git a/content/research_career/UWB_about/attachments/Pasted image 20231124002343.png b/content/research_career/UWB_about/attachments/Pasted image 20231124002343.png
new file mode 100644
index 000000000..82ec2a1c4
Binary files /dev/null and b/content/research_career/UWB_about/attachments/Pasted image 20231124002343.png differ
diff --git a/content/research_career/UWB_about/attachments/S 1.png b/content/research_career/UWB_about/attachments/S 1.png
new file mode 100644
index 000000000..a3c7b935c
Binary files /dev/null and b/content/research_career/UWB_about/attachments/S 1.png differ
diff --git a/content/research_career/UWB_about/attachments/S 2.png b/content/research_career/UWB_about/attachments/S 2.png
new file mode 100644
index 000000000..a3c7b935c
Binary files /dev/null and b/content/research_career/UWB_about/attachments/S 2.png differ
diff --git a/content/research_career/UWB_about/attachments/S.png b/content/research_career/UWB_about/attachments/S.png
new file mode 100644
index 000000000..39954eeb5
Binary files /dev/null and b/content/research_career/UWB_about/attachments/S.png differ
diff --git a/content/research_career/UWB_about/attachments/S21_adjust 1.png b/content/research_career/UWB_about/attachments/S21_adjust 1.png
new file mode 100644
index 000000000..8a88584a3
Binary files /dev/null and b/content/research_career/UWB_about/attachments/S21_adjust 1.png differ
diff --git a/content/research_career/UWB_about/attachments/S21_adjust 2.png b/content/research_career/UWB_about/attachments/S21_adjust 2.png
new file mode 100644
index 000000000..1e8e4b273
Binary files /dev/null and b/content/research_career/UWB_about/attachments/S21_adjust 2.png differ
diff --git a/content/research_career/UWB_about/attachments/S21_adjust 3.png b/content/research_career/UWB_about/attachments/S21_adjust 3.png
new file mode 100644
index 000000000..242ac4259
Binary files /dev/null and b/content/research_career/UWB_about/attachments/S21_adjust 3.png differ
diff --git a/content/research_career/UWB_about/attachments/S21_adjust.png b/content/research_career/UWB_about/attachments/S21_adjust.png
new file mode 100644
index 000000000..958ee6b32
Binary files /dev/null and b/content/research_career/UWB_about/attachments/S21_adjust.png differ
diff --git a/content/research_career/UWB_about/attachments/Transceiver Board Testing.pdf b/content/research_career/UWB_about/attachments/Transceiver Board Testing.pdf
new file mode 100644
index 000000000..851a8069c
Binary files /dev/null and b/content/research_career/UWB_about/attachments/Transceiver Board Testing.pdf differ
diff --git a/content/research_career/UWB_about/attachments/UWB_board_test.pdf b/content/research_career/UWB_about/attachments/UWB_board_test.pdf
new file mode 100644
index 000000000..d4a591ce5
Binary files /dev/null and b/content/research_career/UWB_about/attachments/UWB_board_test.pdf differ
diff --git a/content/research_career/UWB_about/attachments/all.png b/content/research_career/UWB_about/attachments/all.png
new file mode 100644
index 000000000..a3b6f31bb
Binary files /dev/null and b/content/research_career/UWB_about/attachments/all.png differ
diff --git a/content/research_career/UWB_about/attachments/antenna.png b/content/research_career/UWB_about/attachments/antenna.png
new file mode 100644
index 000000000..b28d8869c
Binary files /dev/null and b/content/research_career/UWB_about/attachments/antenna.png differ
diff --git a/content/research_career/UWB_about/attachments/eject_signal_01ns.png b/content/research_career/UWB_about/attachments/eject_signal_01ns.png
new file mode 100644
index 000000000..b7e0295dd
Binary files /dev/null and b/content/research_career/UWB_about/attachments/eject_signal_01ns.png differ
diff --git a/content/research_career/UWB_about/attachments/eject_signal_after_port.png b/content/research_career/UWB_about/attachments/eject_signal_after_port.png
new file mode 100644
index 000000000..f67cc480b
Binary files /dev/null and b/content/research_career/UWB_about/attachments/eject_signal_after_port.png differ
diff --git a/content/research_career/UWB_about/attachments/eject_value.png b/content/research_career/UWB_about/attachments/eject_value.png
new file mode 100644
index 000000000..5211eb24b
Binary files /dev/null and b/content/research_career/UWB_about/attachments/eject_value.png differ
diff --git a/content/research_career/UWB_about/attachments/exp_set.png b/content/research_career/UWB_about/attachments/exp_set.png
new file mode 100644
index 000000000..29c4f3d90
Binary files /dev/null and b/content/research_career/UWB_about/attachments/exp_set.png differ
diff --git a/content/research_career/UWB_about/attachments/fit 1.png b/content/research_career/UWB_about/attachments/fit 1.png
new file mode 100644
index 000000000..69b86d3cc
Binary files /dev/null and b/content/research_career/UWB_about/attachments/fit 1.png differ
diff --git a/content/research_career/UWB_about/attachments/fit 2.png b/content/research_career/UWB_about/attachments/fit 2.png
new file mode 100644
index 000000000..5a10a78fa
Binary files /dev/null and b/content/research_career/UWB_about/attachments/fit 2.png differ
diff --git a/content/research_career/UWB_about/attachments/fit.png b/content/research_career/UWB_about/attachments/fit.png
new file mode 100644
index 000000000..be744e8b0
Binary files /dev/null and b/content/research_career/UWB_about/attachments/fit.png differ
diff --git a/content/research_career/UWB_about/attachments/peak_peak.png b/content/research_career/UWB_about/attachments/peak_peak.png
new file mode 100644
index 000000000..fb018c57d
Binary files /dev/null and b/content/research_career/UWB_about/attachments/peak_peak.png differ
diff --git a/content/research_career/UWB_about/attachments/receive 1.png b/content/research_career/UWB_about/attachments/receive 1.png
new file mode 100644
index 000000000..3e1bb6aca
Binary files /dev/null and b/content/research_career/UWB_about/attachments/receive 1.png differ
diff --git a/content/research_career/UWB_about/attachments/receive 2.png b/content/research_career/UWB_about/attachments/receive 2.png
new file mode 100644
index 000000000..e162a388f
Binary files /dev/null and b/content/research_career/UWB_about/attachments/receive 2.png differ
diff --git a/content/research_career/UWB_about/attachments/receive.png b/content/research_career/UWB_about/attachments/receive.png
new file mode 100644
index 000000000..3e1bb6aca
Binary files /dev/null and b/content/research_career/UWB_about/attachments/receive.png differ
diff --git a/content/research_career/UWB_about/attachments/receiving_signal 1.png b/content/research_career/UWB_about/attachments/receiving_signal 1.png
new file mode 100644
index 000000000..ad969f2e5
Binary files /dev/null and b/content/research_career/UWB_about/attachments/receiving_signal 1.png differ
diff --git a/content/research_career/UWB_about/attachments/receiving_signal.md b/content/research_career/UWB_about/attachments/receiving_signal.md
new file mode 100644
index 000000000..e69de29bb
diff --git a/content/research_career/UWB_about/attachments/set.png b/content/research_career/UWB_about/attachments/set.png
new file mode 100644
index 000000000..3a366a160
Binary files /dev/null and b/content/research_career/UWB_about/attachments/set.png differ
diff --git a/content/research_career/UWB_about/attachments/setup.png b/content/research_career/UWB_about/attachments/setup.png
new file mode 100644
index 000000000..d05f7acb1
Binary files /dev/null and b/content/research_career/UWB_about/attachments/setup.png differ
diff --git a/content/research_career/UWB_about/attachments/signal_obersvation.png b/content/research_career/UWB_about/attachments/signal_obersvation.png
new file mode 100644
index 000000000..9aba766c4
Binary files /dev/null and b/content/research_career/UWB_about/attachments/signal_obersvation.png differ
diff --git a/content/research_career/UWB_about/attachments/signal_observation.png b/content/research_career/UWB_about/attachments/signal_observation.png
new file mode 100644
index 000000000..4d8f32482
Binary files /dev/null and b/content/research_career/UWB_about/attachments/signal_observation.png differ
diff --git a/content/research_career/UWB_about/attachments/signal_observation.tif b/content/research_career/UWB_about/attachments/signal_observation.tif
new file mode 100644
index 000000000..7786b8a74
Binary files /dev/null and b/content/research_career/UWB_about/attachments/signal_observation.tif differ
diff --git a/content/research_career/UWB_about/attachments/signal_receive.png b/content/research_career/UWB_about/attachments/signal_receive.png
new file mode 100644
index 000000000..88071806a
Binary files /dev/null and b/content/research_career/UWB_about/attachments/signal_receive.png differ
diff --git a/content/research_career/UWB_about/attachments/微信图片_20231115140355.jpg b/content/research_career/UWB_about/attachments/微信图片_20231115140355.jpg
new file mode 100644
index 000000000..b18d8568e
Binary files /dev/null and b/content/research_career/UWB_about/attachments/微信图片_20231115140355.jpg differ
diff --git a/content/research_career/UWB_about/flight_time/flight_time_solution.md b/content/research_career/UWB_about/flight_time/flight_time_solution.md
new file mode 100644
index 000000000..c74731233
--- /dev/null
+++ b/content/research_career/UWB_about/flight_time/flight_time_solution.md
@@ -0,0 +1,15 @@
+---
+title: Solution for UWB time of flight
+tags:
+ - UWB
+ - signal-processing
+ - VNA
+---
+
+
+#
+
+
+# Reference
+
+* [KG, Rohde &. Schwarz GmbH &. Co. _Accurately Measure Your UWB Device’s Time of Flight_. https://www.rohde-schwarz.com/cz/applications/accurately-measure-your-uwb-device-s-time-of-flight-application-card_56279-1250689.html. Accessed 13 Dec. 2023.](https://www.rohde-schwarz.com/cz/applications/accurately-measure-your-uwb-device-s-time-of-flight-application-card_56279-1250689.html)
\ No newline at end of file
diff --git a/content/research_career/UWB_about/report/OPAMP_health_test.md b/content/research_career/UWB_about/report/OPAMP_health_test.md
new file mode 100644
index 000000000..cfa459990
--- /dev/null
+++ b/content/research_career/UWB_about/report/OPAMP_health_test.md
@@ -0,0 +1,115 @@
+---
+title: UWB Board OPA699 I/O Test
+tags:
+ - hardware
+ - UWB
+---
+# OPA699 Pin Configuration
+
+
+
+# Board Realistic Diagram
+
+
+
+
+We call the op-amp above OPAM_1 and the op-amp below OPAM_2.
+
+
+# Test Method
+
+
+
+* Connect oscilloscope to inverting input pin and noninverting input pin.
+* Connect oscilloscope to Gnd and inverting input pin.
+* Connect oscilloscope to Gnd and noninverting input pin.
+* Connect oscilloscope to Gnd and output pin.
+* Connect oscilloscope to Gnd and NC pin.
+
+
+# Results
+
+## OPAM_1
+
+### Pin 2,3 Differential Input
+
+
+
+> [!summary]
+> Peak-Peak Amp ——500mV
+>
+> Frequency —— 10.00MHz
+
+
+### Pin 2 Inverting Input
+
+
+
+> [!summary]
+> Peak-Peak Amp ——392.0mV
+>
+> Frequency —— 9.97MHz
+
+### Pin 6 - Output
+
+
+
+> [!summary]
+> Peak-Peak Amp —— 4.32v
+>
+> Frequency —— 10.01MHz
+
+
+## OPAM_2
+
+### Pin 2,3 Differential Input
+
+
+
+> [!summary]
+> Peak-Peak Amp ——500mV
+>
+> Frequency —— 10.00MHz
+
+
+### Pin 2 Inverting Input
+
+
+
+> [!summary]
+> Peak-Peak Amp ——576.0mV
+>
+> Frequency —— 10.11MHz
+
+
+### Pin 3 Noninverting Input
+
+
+
+> [!summary]
+> Peak-Peak Amp ——368.0mV
+>
+> Frequency —— 10.02MHz
+
+
+### Pin 6 - Output
+
+
+
+> [!summary]
+> Peak-Peak Amp ——4.44V
+>
+> Frequency —— 10.03MHz
+
+
+
diff --git a/content/research_career/UWB_about/report/UWB_Board_AMP_circuit_test_1_1.md b/content/research_career/UWB_about/report/UWB_Board_AMP_circuit_test_1_1.md
new file mode 100644
index 000000000..64b6d5e00
--- /dev/null
+++ b/content/research_career/UWB_about/report/UWB_Board_AMP_circuit_test_1_1.md
@@ -0,0 +1,82 @@
+---
+title: UWB Board OPA699 I/O Test 1.1
+tags:
+ - hardware
+ - UWB
+---
+# OPA699 Pin Configuration
+
+
+
+
+# Output Pin 5-8 Result
+
+## Pin 5 Result
+
+
+
+> [!abstract]
+> peak - peak AMP —— 520.0mV
+
+
+## Pin 6 Result (Output)
+
+
+
+> [!abstract]
+> peak - peak AMP —— 4.36V
+>
+> Cycle-Time —— 99.90ns
+
+
+## Pin 7 Result
+
+
+
+
+
+> [!abstract]
+> peak - peak AMP —— 240 mV
+
+
+## Pin 8 Result
+
+
+
+> [!abstract]
+> peak - peak AMP —— 520.0mV
+
+
+# R51 Amp Result
+
+
+
+
+
+> [!abstract]
+> peak - peak AMP —— 4.40V
+>
+> Cycle-Time —— 99.98ns
+
+Same as the OPA699 output.
+
+Expected ones:
+
+
+
+
+# R44 Amp Result
+
+
+
+
+
+> [!abstract]
+> peak - peak AMP —— 11.60V
+>
+> Cycle-Time —— 99.98ns
+
+
+Expected ones:
+
+
\ No newline at end of file
diff --git a/content/research_career/UWB_about/report/UWB_board_test.md b/content/research_career/UWB_about/report/UWB_board_test.md
new file mode 100644
index 000000000..c31296a56
--- /dev/null
+++ b/content/research_career/UWB_about/report/UWB_board_test.md
@@ -0,0 +1,31 @@
+---
+title: UWB Board Port 3 Test
+tags:
+ - UWB
+ - hardware
+---
+# Receiver Board Testing
+
+
+
+
+Fig 1. Reference Schematic
+
+We used a 200kHz DAC to test the expanded signal, moving a metal plate back and forth in front of the antenna.
+
+## Expanded Output Records
+
+
+
+
+
+
+
+The pulse width is 0.002 s, the same as expected:
+
+
+
+
+But the pulse shape is not alike; our pulse has some ripples before the main impulse.
+
+
+
diff --git a/content/research_career/UWB_about/report/UWB_device_test_overview.md b/content/research_career/UWB_about/report/UWB_device_test_overview.md
new file mode 100644
index 000000000..113792352
--- /dev/null
+++ b/content/research_career/UWB_about/report/UWB_device_test_overview.md
@@ -0,0 +1,11 @@
+---
+title: UWB Device Test Overview
+tags:
+ - UWB
+ - devices
+ - hardware
+---
+# Key files
+
+* [Transceiver Board Testing.pdf](https://pinktalk.online/research_career/UWB_about/attachments/Transceiver%20Board%20Testing.pdf)
+
diff --git a/content/research_career/UWB_about/report/UWB_device_test_report.md b/content/research_career/UWB_about/report/UWB_device_test_report.md
new file mode 100644
index 000000000..7fd2421d0
--- /dev/null
+++ b/content/research_career/UWB_about/report/UWB_device_test_report.md
@@ -0,0 +1,68 @@
+---
+title: UWB Device Setup Test Report
+tags:
+ - UWB
+ - "#hardware"
+---
+
+# Voltage Test Table
+
+## UWB Board Input Voltage Table
+
+
+
+
+### Test Method 1
+
+
+
+| **Demanded Voltage** | **Measured Voltage** |
+| ---- | ---- |
+| Gnd | Gnd |
+| +3.3v | +3.32v |
+| +5v | +4.99v |
+| -5v | -11.42v |
+| +21.5v | +21.4v |
+| -16.8v | -17.1v |
+| +21.5v | +21.4v |
+| -16.8v | -17.1v |
+| -8v | -8v |
+| +8v | +8v |
+
+### Test Method 2
+
+
+
+
+| **Demanded Voltage** | **Measured Voltage** |
+|---|---|
+|Gnd|Gnd|
+|+3.3v|+2.8v |
+|+5v|+4.2v |
+|-5v|+3.3v |
+|+21.5v|+21.0v |
+|-16.8v|-17.6v |
+|+21.5v|+21.0v |
+|-16.8v|-17.6v |
+|-8v|-8.5v |
+|+8v|+7.6v |
+
+
+## Power Board Output Voltage Table
+
+
+
+| **Demanded Voltage** | **Measured Voltage** |
+| ------------- | -------- |
+| Gnd | Gnd |
+| +28v | +28.4v |
+| -17v | -17.18v |
+| +21v | +21.4v |
+| -10v | -10.01v |
+| +12v | 12.35v |
+| -8v | -8.13v |
+| +8v | 8.05v |
+| -5v | -11.43v |
+| +5v | 4.98v |
+| +3.3v | 3.32v |
+
diff --git a/content/research_career/UWB_about/report/UWB_device_test_report_update.md b/content/research_career/UWB_about/report/UWB_device_test_report_update.md
new file mode 100644
index 000000000..c0bd0d584
--- /dev/null
+++ b/content/research_career/UWB_about/report/UWB_device_test_report_update.md
@@ -0,0 +1,29 @@
+---
+title: UWB Device Setup Test Report, Voltage Test
+tags:
+ - UWB
+ - hardware
+ - devices
+---
+
+# Voltage Test Table
+
+## UWB Board Input Voltage Table
+
+
+
+
\ No newline at end of file
diff --git a/content/research_career/UWB_about/report/UWB_signal_generate.md b/content/research_career/UWB_about/report/UWB_signal_generate.md
new file mode 100644
index 000000000..a6ad00b4b
--- /dev/null
+++ b/content/research_career/UWB_about/report/UWB_signal_generate.md
@@ -0,0 +1,84 @@
+---
+title: How to generate UWB signal
+tags:
+ - UWB
+ - signal-processing
+---
+# Actual Signals which find use in UWB systems
+
+* Gaussian-derived pulses
+* Edge-derived pulses
+* Sinc pulses
+* Truncated sine pulses
+* Chirp signals (frequency sweep)
+
+## Gaussian-derived pulses
+
+Time-domain function:
+
+$$
+T_n(t) = \frac{\tau^n (\frac{n}{2})!}{n!} \frac{d^n}{d t^n} e^{-\frac{t^2}{\tau^2}}
+$$
+Frequency-domain function:
+
+
+$$
+F_n(\omega) = \frac{\tau^n (\frac{n}{2})!}{n!} (j\omega)^n \sqrt{\pi \tau^2}\, e^{-\frac{\tau^2 \omega^2}{4}}
+$$
+
+
+# Methods of generating UWB signals
+
+
+There are two methods to generate UWB signals.
+
+1. Radio Frequency (RF)/ microwave analogue techniques
+2. Digital synthesis methods such as direct digital synthesis (DDS).
+
+
+## RF/microwave analogue techniques
+
+
+Modern analogue techniques make use of solid-state devices such as diodes and transistors.
+
+Diodes:
+* Step recovery diodes (SRDs)
+* Tunnel diodes
+* Schottky diodes
+
+## DDS
+
+### Specific Example of using DDS to generate UWB signal
+
+
+
+
+The article [Circularly Polarized Ultra-Wideband Radar System for Vital Signs Monitoring](https://ieeexplore.ieee.org/document/6491501) uses an AD9959 DDS to control the UWB pulse repetition frequency (PRF). This DDS can generate sinusoids up to 250MHz with 0.1-Hz frequency tuning resolution. It has four channels; one is used for the transmitted pulse and one for the reference pulse stored for the receiver.
+
+The sinusoidal outputs of the DDS are amplified by [op-amps](signal_processing/device_and_components/op_amp.md) (a Texas Instruments OPA699 in that article). After amplification, the signal is fed to a [step recovery diode](signal_processing/device_and_components/SRD.md) (SRD).
+
+The **cascaded shunt-mode SRD** with the **decreasing-lifetime method** of pulse generation produces high-amplitude pulses of 3 $V_{p-p}$ at low PRFs (megahertz range), so the pulse generator can directly drive the antenna subsystem, removing the need for expensive broadband power amplifiers.
+
+This method is introduced in another article, [Chan, K. K. M., et al. “Efficient Passive Low-Rate Pulse Generator for Ultra-Wideband Radar.” _IET Microwaves, Antennas & Propagation_, vol. 4, no. 12, 2010, p. 2196. _DOI.org (Crossref)_, https://doi.org/10.1049/iet-map.2010.0030.](http://dx.doi.org/10.1049/iet-map.2010.0030), which states that a commonly used scheme for UWB pulse generation is the **shunt-mode SRD impulse generator**.
+
+
+
+
+
+The source, a 50 $\Omega$ waveform generator, generates CWs with amplitudes greater than the turn-on voltages of SRD1 and SRD2. L1 and SRD1 form the first stage of the shunt-mode harmonic generator, while L2 and SRD2 form the second stage; the first stage is cascaded with the second in a series configuration.
+
+For different UWB applications, such as 5GHz, 10GHz, ..., we can calculate the component parameters for the design. The key formulas are:
+
+$$
+\text{Pulse Width} = \pi \sqrt{LC}
+$$
+
+$$
+\text{Time constant} = RC
+$$
+After the SRD-based pulse generator, the CWs are transformed into UWB pulses. The last step is to radiate the UWB signal through the feed network and antenna.
+
+# Reference
+
+* [Papers Read in 2023.11](research_career/papers_read/papers_2023_11.md)
+* [Chan, K. K. M., et al. “Efficient Passive Low-Rate Pulse Generator for Ultra-Wideband Radar.” _IET Microwaves, Antennas & Propagation_, vol. 4, no. 12, 2010, p. 2196. _DOI.org (Crossref)_, https://doi.org/10.1049/iet-map.2010.0030.](http://dx.doi.org/10.1049/iet-map.2010.0030).
\ No newline at end of file
diff --git a/content/research_career/UWB_about/report/VNA_based_UWB_echo_signal_experiment2.md b/content/research_career/UWB_about/report/VNA_based_UWB_echo_signal_experiment2.md
new file mode 100644
index 000000000..bd494b00c
--- /dev/null
+++ b/content/research_career/UWB_about/report/VNA_based_UWB_echo_signal_experiment2.md
@@ -0,0 +1,124 @@
+---
+title: VNA based UWB echo signal Experiment Part 2.
+tags:
+ - report
+ - experiment-report
+ - VNA
+ - UWB
+ - signal-processing
+---
+## Objective
+
+1. Take measurements for building a safe experimental environment.
+2. Reproduce the experimental environment and determine the mutual coupling signal in it.
+3. Compare the reflective properties of wood and metal materials.
+4. Construct time-of-flight calculations for UWB reflection signals in the VNA-based system.
+
+
+## Materials
+
+* VNA device, KEYSIGHT 5063A ENA
+* N Male to SMA Female Connector
+* Coaxial Cable
+* UWB antenna, provided by Gary (Shenzhen), bandwidth 3 - 20GHz
+* Metal plate, used as a fully reflective obstacle, size 34cm
+* Wood plate
+
+
+Fig 1. Experimental setup overview: the VNA transmits a frequency-swept signal from Port1; after reflection from the medium, the echo is received at Port2; the VNA measures the scattering parameters, from which the received signal corresponding to the transmitted signal is calculated.
+
+
+### Mutual Coupling Signal
+
+Mutual coupling signal refers to the phenomenon where electromagnetic fields from one circuit or antenna influence the behavior of another nearby circuit or antenna. It occurs when the electromagnetic fields generated by one element couple or interact with the other elements in the vicinity.
+
+To obtain the mutual coupling signal, fix the antennas at a fixed spacing of 8 cm (other spacings can also be used), make sure no object is nearby, and measure the S21 parameters. Convert them to a time-domain signal, called the mutual coupling signal $S_a(t)$.
+
+$$
+S_a(t) = \sum_{i}^{N} S_{21_i} \cdot A_{\text{gaussian}} \cdot \cos(\omega_i t + \phi_i)
+$$
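+
+A minimal sketch of this conversion (assuming the complex S21 sweep is available as `s21` at frequencies `freqs`, and that `a_gauss` holds the Gaussian weights used when shaping the emitted pulse) might be:
+
+```python
+import numpy as np
+
+def mutual_coupling_signal(freqs, s21, a_gauss, t):
+    """Synthesize Sa(t) by summing the weighted frequency components of S21.
+
+    freqs   : sweep frequencies in Hz
+    s21     : complex S21 values measured with no object nearby
+    a_gauss : Gaussian amplitude weight applied to each frequency
+    t       : time axis in seconds
+    """
+    sa = np.zeros_like(t)
+    for f, s, a in zip(freqs, s21, a_gauss):
+        omega = 2 * np.pi * f
+        # |S21| scales the amplitude, angle(S21) supplies the phase phi_i
+        sa += np.abs(s) * a * np.cos(omega * t + np.angle(s))
+    return sa
+```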
+
+## Procedure
+
+1. Start the VNA device.
+2. Set up the VNA measurement parameters.
+3. Connect antenna to the coaxial cable.
+4. Measure the mutual coupling signal based on the methods mentioned above.
+5. Set the wood plate at different distances from VNA, 5cm - 25cm.
+6. Get S11 and S21 scattering parameter from the wood plate at different distances from VNA.
+7. Analyze raw data and compare the data with metal plate reflection signal data from experiment 1.
+
+## Results
+
+### Mutual Coupling Signal
+
+
+### Metal Plate vs. Wood Plate echo signal
+
+## Conclusion and Future Experiment Plan
\ No newline at end of file
diff --git a/content/research_career/UWB_about/report/VNA_based_simulation_of_UWB_signals.md b/content/research_career/UWB_about/report/VNA_based_simulation_of_UWB_signals.md
new file mode 100644
index 000000000..b7fddedb5
--- /dev/null
+++ b/content/research_career/UWB_about/report/VNA_based_simulation_of_UWB_signals.md
@@ -0,0 +1,254 @@
+---
+title: VNA based simulation of UWB signals
+tags:
+ - VNA
+ - UWB
+ - report
+---
+## Objective
+1. A VNA-based experimental setup is constructed to mimic a UWB apparatus.
+
+2. Produce UWB signals by synthesizing waveforms with various frequencies.
+
+3. Validate the setup and the UWB signal transmitting/receiving process by testing the distance between the system and a moving metal obstacle.
+
+## Hypothesis
+
+It is assumed that the wave reflection process is linear.
+
+## Materials
+
+* VNA device, KEYSIGHT 5063A ENA
+* N Male to SMA Female Connector
+* UWB antenna, provided by Gary (Shenzhen), bandwidth 3 - 20GHz
+* Metal plate, used as a fully reflective obstacle, size 34cm
+
+
+
+Fig 2. Experimental setup overview: the VNA transmits a frequency-swept signal from Port1; after reflection from the medium, the echo is received at Port2; the VNA measures the scattering parameters, from which the received signal corresponding to the transmitted signal is calculated.
+Fig 3. Experimental real-world photo; the reflective medium used here is an iron plate.
+
+
+
+Table 1. The VNA Setup for Measuring
+
+| Option | Value |
+| ---- | ---- |
+| RF Power | -5dBm |
+| IF Bandwidth | 70kHz |
+| Averaging | 64 |
+| Start Frequency | 100kHz |
+| Stop Frequency | 6.5GHz |
+| Number of Frequency Points | 10001 |
+
+### UWB Signal Generation
+
+The present VNA device can only emit swept signals, with a frequency range of 100kHz - 6.5GHz. How to produce a UWB signal is therefore the key problem in the experiment.
+
+The key assumption of the experiment is that the signal is transmitted from VNA Port1 and received at Port2, and that the whole process is linear. Thus,
+
+$$
+\text{Eject Signal}, \quad x(t) = \sum_{i}^{N}x_i(t)
+$$
+$$
+\text{Receive Signal}, \quad y(t)=\sum_{i}^N y_i(t)
+$$
+That is, a UWB signal can be represented as a linear superposition of swept signals of different frequencies, and at the receiving port the full reflected UWB signal is obtained by summing the returned signals corresponding to the swept emitted waves.
+
+We can follow the spectrum of a Gaussian pulse modulated by a cosine signal to adjust the intensity of each frequency in our sweep, and then recover the cosine-modulated Gaussian pulse by summing these signals. The mathematical analysis and the resulting figure follow.
+
+Gaussian Impulse Signal modulated by the cosine signal:
+$$
+x(t) = \cos(\omega_c t)\cdot e^{[-\frac{1}{2\sigma^2} \cdot t^2]} = \cos(\omega_c t)\cdot e^{-a\cdot t^2}
+$$
+
+where the center frequency $f_c$ is the frequency of the cosine signal and the bandwidth of the pulse is controlled by the parameter $a$ of the Gaussian signal.
+
+By following two Fourier transform pairs we can obtain the spectrogram of a Gaussian pulse,
+
+$$
+e^{-at^2} \leftrightarrow \frac{1}{\sqrt{2a}}e^{-\frac{\omega^2}{4a}}
+$$
+$$
+x(t)cos(\omega_0 t) \leftrightarrow \frac{1}{2}[X(\omega + \omega_0) + X(\omega - \omega_0)]
+$$
+
+So the spectrum of the cosine-modulated Gaussian pulse is:
+
+$$
+X(\omega) = \frac{1}{\sqrt{2a}} [e^{-\frac{(\omega + \omega_c)^2}{4a}} + e^{-\frac{(\omega - \omega_c)^2}{4a}}]
+$$
+Here's the result,
+
+
+
+Fig 4. The two upper diagrams are the magnitude and phase spectra from the Fourier transform of a Gaussian pulse modulated by a cosine signal; the third diagram is the Gaussian pulse generated by summing signals of all frequencies with adjusted magnitudes.
+
+The two upper diagrams are the magnitude and phase spectra obtained from the Fourier transform of a Gaussian pulse modulated by a cosine signal. When generating the Gaussian pulse, we take the point where the spectral magnitude drops to 50% as the edge of the bandwidth. A Gaussian pulse is then obtained by summing the frequency components adjusted according to these magnitude and phase spectra. Under the linear-system assumption, the emitted swept signals can thus collectively act as an emitted Gaussian pulse, and the resulting emitted signal is shown in the figure.
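+
+A minimal numpy sketch of this synthesis step, under the stated linearity assumption, is shown below; the center frequency, bandwidth parameter $a$, and sweep grid are illustrative assumptions rather than the exact experimental settings:
+
+```python
+import numpy as np
+
+# Illustrative parameters (assumptions, not the experiment's exact values)
+fc = 3e9                                   # carrier (center) frequency
+a = 1e19                                   # bandwidth parameter in x(t) = cos(wc t) * exp(-a t^2)
+freqs = np.linspace(100e3, 6.5e9, 2001)    # sweep grid similar to the VNA span
+t = np.linspace(-2e-9, 2e-9, 4001)
+
+# Target magnitude spectrum of the cosine-modulated Gaussian
+w, wc = 2 * np.pi * freqs, 2 * np.pi * fc
+X = np.exp(-(w - wc) ** 2 / (4 * a)) + np.exp(-(w + wc) ** 2 / (4 * a))
+
+# Keep only frequencies above 50% of the peak magnitude (the bandwidth rule above)
+mask = X >= 0.5 * X.max()
+
+# Sum the swept tones, each weighted by the target magnitude at its frequency
+pulse = np.zeros_like(t)
+for wi, amp in zip(w[mask], X[mask]):
+    pulse += amp * np.cos(wi * t)
+pulse /= np.abs(pulse).max()
+```
+
+The summed tones approximate the cosine-modulated Gaussian pulse up to an overall scale factor, which is what the linear-superposition argument above relies on.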
+
+
+### Scattering Parameters
+
+
+$$
+S_{11} = \frac{V_{\text{reflected at port1}}}{V_{\text{towards port1}}}
+$$
+$$
+S_{21} = \frac{V_{\text{out of port2}}}{V_{\text{towards port1}}}
+$$
+
+$S_{11}$ is defined as the square root of the ratio of the energy reflected at Port1 to the incident energy, and is often simplified as the ratio of the equivalent reflected voltage to the equivalent incident voltage.
+
+$S_{21}$ represents the insertion loss, i.e., how much energy has been transmitted to the destination (Port2). The larger the value, the better; the ideal value is 1, i.e., 0 dB. The larger $S_{21}$ is, the more efficient the transmission; it is generally recommended that $S_{21} > 0.7$ (about -3 dB) before the link is considered to provide useful signal transmission.
+
+### Time-domain Signal Construction by Scattering Parameters
+
+We can compute the time-domain signal from the scattering parameters, where the transmitted signal can be obtained from the S11 parameter and the received signal from the S21 parameter.
+
+$$
+\text{Eject Signal} = \sum_{\text{different frequency}}(1-S_{11_i})\sin(\omega_i t + \phi_i)
+$$
+$$
+\text{Receive Signal} = \sum_{\text{different frequency}} S_{21_i}\sin(\omega_i t + \phi_i)
+$$
+
+Also, when no reflecting material is placed in front of the VNA there is still an $S_{21}$ parameter, which we call the initial $S_{21}$ parameter; it represents the base energy that reaches Port2. When calculating the received signal we need to subtract this base energy, and because we adjust the magnitudes of the different frequency components to fit the Gaussian pulse, both factors must be accounted for. The adjusted formula for the received signal is:
+
+$$
+\text{Receive Signal} = \sum_{\text{different frequency}} A_{\text{gaussian}} \cdot (S_{21_i} - S_{21_{\text{initial}}}) \cdot \sin(\omega_i t + \phi_i)
+$$
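+
+A minimal sketch of this reconstruction (assuming `s21` and `s21_initial` are the measured and no-target complex S21 sweeps, `a_gauss` is the Gaussian weighting used at emission, and taking the complex difference before extracting magnitude and phase) might be:
+
+```python
+import numpy as np
+
+def receive_signal(freqs, s21, s21_initial, a_gauss, t):
+    """Reconstruct the received time-domain signal from S21 sweeps.
+
+    The no-target sweep s21_initial is subtracted to remove the base energy
+    that reaches Port2 without any reflector, and each tone is weighted by
+    the same Gaussian factor used when shaping the emitted pulse.
+    """
+    y = np.zeros_like(t)
+    for f, s, s0, a in zip(freqs, s21, s21_initial, a_gauss):
+        diff = s - s0                        # remove the baseline (mutual coupling) term
+        omega = 2 * np.pi * f
+        y += a * np.abs(diff) * np.sin(omega * t + np.angle(diff))
+    return y
+```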
+
+## Procedure
+
+1. Start the VNA device.
+2. Set up the VNA measurement parameters.
+3. Set the metal plate at different distances from the VNA: 5cm, 10cm, 15cm, 20cm, 25cm, 30cm, and with no plate present, which means no reflection.
+4. Get S11 and S21 scattering parameter from the metal plate at different distances from VNA.
+5. Analyze raw data - scattering parameter.
+
+## Results
+
+### Scattering Parameters - S11, S21
+
+First we focus on the raw data we obtained, the scattering parameters. Because we emit the magnitude-adjusted signal to generate the Gaussian impulse, we also adjust the S21 parameters to match the adjustments made at emission.
+
+
+
+Fig 5. S21 magnitude (linear format), multiplied by the Gaussian pulse adjustment factor from Fig 4
+
+We also remove the base energy reaching Port2 by subtracting $S_{21_{\text{initial}}}$; here is the result:
+
+
+
+
+Fig 6. S21 magnitude (linear format), multiplied by the Gaussian pulse adjustment factor from Fig 4, with the base energy subtracted
+
+
+### Received energy at different distances
+
+
+Referring to Fig 6, the area under the S21 curves is used to estimate the energy received at Port2, as illustrated in Fig 7. The values are given in Table 2.
+
+
+
+Fig 7. Area under the S21 linear curve at different distances
+
+
+
+Table 2. The AUC of the S21 linear curve at different distances, normalized to 0 - 1
+
+| Distance | AUC |
+| ---- | ---- |
+| 5cm | 1 |
+| 10cm | 0.56055976 |
+| 15cm | 0.32978441 |
+| 20cm | 0.17302525 |
+| 25cm | 0.16454077 |
+| 30cm | 0.11857247 |
+| 35cm | 0.08575671 |
+| 40cm | 0.06632118 |
+
+
+
+
+We then used the data from this table to perform polynomial fitting using a grid search method with the MSE (mean squared error) metric as a criterion, as well as five-fold cross-validation, and finally obtained the best result for polynomial fitting as shown below:
+
+
+
+Fig 7. Best polynomial function fit for AUC vs. distance
+
+The best fitting function is,
+
+$$
+AUC = -5.37 \times 10^{-5}d^3 + 4.83 \times 10^{-3} d^2 -1.46 \times 10^{-1} d
+$$
+We can recover the distance from this curve by solving the cubic equation. The fit quality is $MSE = 2.68\times 10^{-4}$ and $r^2 = 0.997$.
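+
+A minimal sketch of this degree search with five-fold cross-validation and an MSE criterion, using scikit-learn and the Table 2 values, is given below; whether the original fit included an intercept term is not recorded, so this only approximates the procedure:
+
+```python
+import numpy as np
+from sklearn.pipeline import make_pipeline
+from sklearn.preprocessing import PolynomialFeatures
+from sklearn.linear_model import LinearRegression
+from sklearn.model_selection import GridSearchCV
+
+# Distance (cm) and normalized AUC values from Table 2
+d = np.array([5, 10, 15, 20, 25, 30, 35, 40], dtype=float).reshape(-1, 1)
+auc = np.array([1.0, 0.56055976, 0.32978441, 0.17302525,
+                0.16454077, 0.11857247, 0.08575671, 0.06632118])
+
+# Grid search over the polynomial degree with 5-fold CV and an MSE criterion
+model = make_pipeline(PolynomialFeatures(), LinearRegression())
+search = GridSearchCV(model,
+                      {"polynomialfeatures__degree": [1, 2, 3, 4, 5]},
+                      scoring="neg_mean_squared_error", cv=5)
+search.fit(d, auc)
+print("best degree:", search.best_params_)
+print("cross-validated MSE:", -search.best_score_)
+```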
+
+### Receiving Signal Observation
+
+
+
+
+Fig 8. Received signal for one cycle at different distances
+
+With the S21 parameters and the previously calculated factors, we finally compute the signal received at Port2 for one cycle of the minimum frequency, 100 kHz. From this signal we obtain the peak-to-peak value, $V_{pp} = \max - \min$:
+
+
+
+Fig 9. Peak-to-peak value of the received signal at different distances
+
+## Conclusion and Problem
+
+### Conclusion
+
+* The UWB echo signal is capable of range detection
+    * By estimating the energy of the received signal, the system can identify the distance of the metal obstacle in the range 5cm - 40cm.
+    * Following a third-degree polynomial law, the energy of the received echo decays rapidly within 20 cm, after which the decay levels off.
+
+### Problem
+
+* Is the present protocol for simulating UWB signal emission/reception reasonable? Is there a better method for UWB signal simulation?
+* The **lack of feed-forward circuitry** in VNAs, compared with mature UWB transmitters, has a significant impact on the strength of both the emitted and received signals. As a result, the S21 value in Fig 6 is only about 0.35, far below 0.70.
\ No newline at end of file
diff --git a/content/research_career/UWB_about/report/VNA_research.md b/content/research_career/UWB_about/report/VNA_research.md
new file mode 100644
index 000000000..396bd2162
--- /dev/null
+++ b/content/research_career/UWB_about/report/VNA_research.md
@@ -0,0 +1,56 @@
+---
+title: Vector Network Analysis - Review
+tags:
+ - UWB
+---
+# Brief Introduction
+
+A vector network analyzer (VNA) is a vector instrument used for performance testing and troubleshooting of high-frequency communication equipment.
+# Working Principle
+
+*A VNA evaluates the performance of the device under test (DUT) by sending an RF signal to it and measuring the reflection and transmission characteristics of the signal on its return.* The core components of a VNA are a pair of **directional couplers**, connected to the signal source, the receiver, and the DUT. The directional couplers separate the transmitted and received signals, ensuring that the signal cannot travel directly from the transmitter to the receiver, so that the reflection and transmission characteristics can be measured accurately.
+
+
+
+
+## Reflection and Transmission Parameters
+$$
+Γ = \frac{V_{reflected}}{V_{incident}} = \frac{Z_L - Z_0}{Z_L + Z_0}
+$$
+
+$$
+T = \frac{V_{transmitted}}{V_{incident}}
+$$
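+
+As a quick illustration of these definitions (example impedances, not measured values):
+
+```python
+import math
+
+# Illustrative values: a 75-ohm load on a 50-ohm reference impedance
+Z0, ZL = 50.0, 75.0
+gamma = (ZL - Z0) / (ZL + Z0)                  # reflection coefficient, 0.2
+return_loss_db = -20 * math.log10(abs(gamma))  # about 14 dB
+print(f"Gamma = {gamma:.2f}, return loss = {return_loss_db:.1f} dB")
+```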
+
+
+
+
+## Demo picture
+
+
+# Key Parameters
+
+1. Frequency range: the frequency range of a VNA determines the range of signal frequencies it can measure. Different VNAs have different frequency ranges, and the user needs to choose a suitable instrument for the application.
+
+2. Power range: the power range of a VNA indicates the range of signal power it can measure. Different applications may require different power ranges, so power requirements should be considered when selecting a VNA.
+
+3. Resolution bandwidth: the resolution bandwidth of a VNA is the frequency spacing between measurement points in the frequency domain. A smaller resolution bandwidth gives more precise results but requires a longer measurement time.
+
+4. Dynamic range: the dynamic range of a VNA is the difference between the largest and smallest signal powers it can measure. A higher dynamic range means the VNA can measure both very weak and very strong signals, which is important for weak-signal and strong-signal scenarios.
+
+5. **S-parameters**: S-parameters are the most commonly used measurement quantities of a VNA and describe the scattering behaviour of a device between its ports. They contain both magnitude and phase information and can be used to evaluate matching, reflection loss, transmission loss, and so on.
+
+6. Noise figure: the noise figure of a VNA describes how much the noise introduced during the measurement affects the results. A lower noise figure means the VNA can deliver more accurate and reliable measurements.
+
+
+# What We need
+
+* A VNA with an ultra-wide band
+* UWB antennas
+* Possibly impedance-matching adapters
+
+# Reference
+
+* https://www.electronics-notes.com/articles/test-methods/rf-vector-network-analyzer-vna/what-is-a-vna.php
+* https://zhuanlan.zhihu.com/p/509811532
+* https://download.tek.com/document/85T_60918_0_Tek_VNA_PR_05.pdf
\ No newline at end of file
diff --git a/content/research_career/UWB_about/report/attachments/02156794ddb5d9dbb7ca91e3965f6db.jpg b/content/research_career/UWB_about/report/attachments/02156794ddb5d9dbb7ca91e3965f6db.jpg
new file mode 100644
index 000000000..901fed388
Binary files /dev/null and b/content/research_career/UWB_about/report/attachments/02156794ddb5d9dbb7ca91e3965f6db.jpg differ
diff --git a/content/research_career/UWB_about/report/attachments/0f51b9ad16d2735614bff788e55dda5.jpg b/content/research_career/UWB_about/report/attachments/0f51b9ad16d2735614bff788e55dda5.jpg
new file mode 100644
index 000000000..0d1fe2980
Binary files /dev/null and b/content/research_career/UWB_about/report/attachments/0f51b9ad16d2735614bff788e55dda5.jpg differ
diff --git a/content/research_career/UWB_about/report/attachments/3a3cabdf08d107b7fe5086c7379525b.jpg b/content/research_career/UWB_about/report/attachments/3a3cabdf08d107b7fe5086c7379525b.jpg
new file mode 100644
index 000000000..87fd328a1
Binary files /dev/null and b/content/research_career/UWB_about/report/attachments/3a3cabdf08d107b7fe5086c7379525b.jpg differ
diff --git a/content/research_career/UWB_about/report/attachments/423a04e662206d11b2a178eeaf326dd.jpg b/content/research_career/UWB_about/report/attachments/423a04e662206d11b2a178eeaf326dd.jpg
new file mode 100644
index 000000000..ec858acea
Binary files /dev/null and b/content/research_career/UWB_about/report/attachments/423a04e662206d11b2a178eeaf326dd.jpg differ
diff --git a/content/research_career/UWB_about/report/attachments/433a119c3e1f83e7ea7157832975943 1.jpg b/content/research_career/UWB_about/report/attachments/433a119c3e1f83e7ea7157832975943 1.jpg
new file mode 100644
index 000000000..8fc154fcf
Binary files /dev/null and b/content/research_career/UWB_about/report/attachments/433a119c3e1f83e7ea7157832975943 1.jpg differ
diff --git a/content/research_career/UWB_about/report/attachments/433a119c3e1f83e7ea7157832975943.jpg b/content/research_career/UWB_about/report/attachments/433a119c3e1f83e7ea7157832975943.jpg
new file mode 100644
index 000000000..8fc154fcf
Binary files /dev/null and b/content/research_career/UWB_about/report/attachments/433a119c3e1f83e7ea7157832975943.jpg differ
diff --git a/content/research_career/UWB_about/report/attachments/5719cb6122c3ba1bef2738606b8214d.jpg b/content/research_career/UWB_about/report/attachments/5719cb6122c3ba1bef2738606b8214d.jpg
new file mode 100644
index 000000000..ec706962e
Binary files /dev/null and b/content/research_career/UWB_about/report/attachments/5719cb6122c3ba1bef2738606b8214d.jpg differ
diff --git a/content/research_career/UWB_about/report/attachments/5b511d3f0eeabedf9b38e52ade571fb.jpg b/content/research_career/UWB_about/report/attachments/5b511d3f0eeabedf9b38e52ade571fb.jpg
new file mode 100644
index 000000000..f712ff154
Binary files /dev/null and b/content/research_career/UWB_about/report/attachments/5b511d3f0eeabedf9b38e52ade571fb.jpg differ
diff --git a/content/research_career/UWB_about/report/attachments/70e99e67031e11825b0e3120335ddb0.png b/content/research_career/UWB_about/report/attachments/70e99e67031e11825b0e3120335ddb0.png
new file mode 100644
index 000000000..6c62ed262
Binary files /dev/null and b/content/research_career/UWB_about/report/attachments/70e99e67031e11825b0e3120335ddb0.png differ
diff --git a/content/research_career/UWB_about/report/attachments/8087296672da2c599b091554f464a57.jpg b/content/research_career/UWB_about/report/attachments/8087296672da2c599b091554f464a57.jpg
new file mode 100644
index 000000000..1c395b0ce
Binary files /dev/null and b/content/research_career/UWB_about/report/attachments/8087296672da2c599b091554f464a57.jpg differ
diff --git a/content/research_career/UWB_about/report/attachments/9ddbdef14232761547151d48b7ca61b.jpg b/content/research_career/UWB_about/report/attachments/9ddbdef14232761547151d48b7ca61b.jpg
new file mode 100644
index 000000000..5a333d873
Binary files /dev/null and b/content/research_career/UWB_about/report/attachments/9ddbdef14232761547151d48b7ca61b.jpg differ
diff --git a/content/research_career/UWB_about/report/attachments/9f2cf6473d35c69e8facce303d73739.jpg b/content/research_career/UWB_about/report/attachments/9f2cf6473d35c69e8facce303d73739.jpg
new file mode 100644
index 000000000..1c1fb4ae5
Binary files /dev/null and b/content/research_career/UWB_about/report/attachments/9f2cf6473d35c69e8facce303d73739.jpg differ
diff --git a/content/research_career/UWB_about/report/attachments/Figure_1 1.png b/content/research_career/UWB_about/report/attachments/Figure_1 1.png
new file mode 100644
index 000000000..940a18caf
Binary files /dev/null and b/content/research_career/UWB_about/report/attachments/Figure_1 1.png differ
diff --git a/content/research_career/UWB_about/report/attachments/Figure_1 2.png b/content/research_career/UWB_about/report/attachments/Figure_1 2.png
new file mode 100644
index 000000000..3c8fdc543
Binary files /dev/null and b/content/research_career/UWB_about/report/attachments/Figure_1 2.png differ
diff --git a/content/research_career/UWB_about/report/attachments/Figure_1 3.png b/content/research_career/UWB_about/report/attachments/Figure_1 3.png
new file mode 100644
index 000000000..b0d14d4a4
Binary files /dev/null and b/content/research_career/UWB_about/report/attachments/Figure_1 3.png differ
diff --git a/content/research_career/UWB_about/report/attachments/Figure_1 4.png b/content/research_career/UWB_about/report/attachments/Figure_1 4.png
new file mode 100644
index 000000000..2c78a566f
Binary files /dev/null and b/content/research_career/UWB_about/report/attachments/Figure_1 4.png differ
diff --git a/content/research_career/UWB_about/report/attachments/Figure_1 5.png b/content/research_career/UWB_about/report/attachments/Figure_1 5.png
new file mode 100644
index 000000000..39ace7e0a
Binary files /dev/null and b/content/research_career/UWB_about/report/attachments/Figure_1 5.png differ
diff --git a/content/research_career/UWB_about/report/attachments/Figure_1 6.png b/content/research_career/UWB_about/report/attachments/Figure_1 6.png
new file mode 100644
index 000000000..93868408a
Binary files /dev/null and b/content/research_career/UWB_about/report/attachments/Figure_1 6.png differ
diff --git a/content/research_career/UWB_about/report/attachments/Figure_1 7.png b/content/research_career/UWB_about/report/attachments/Figure_1 7.png
new file mode 100644
index 000000000..f9bfa6998
Binary files /dev/null and b/content/research_career/UWB_about/report/attachments/Figure_1 7.png differ
diff --git a/content/research_career/UWB_about/report/attachments/Figure_1.png b/content/research_career/UWB_about/report/attachments/Figure_1.png
new file mode 100644
index 000000000..cd7935864
Binary files /dev/null and b/content/research_career/UWB_about/report/attachments/Figure_1.png differ
diff --git a/content/research_career/UWB_about/report/attachments/Pasted image 20240117155955.png b/content/research_career/UWB_about/report/attachments/Pasted image 20240117155955.png
new file mode 100644
index 000000000..9711e64a3
Binary files /dev/null and b/content/research_career/UWB_about/report/attachments/Pasted image 20240117155955.png differ
diff --git a/content/research_career/UWB_about/report/attachments/Pasted image 20240117160051.png b/content/research_career/UWB_about/report/attachments/Pasted image 20240117160051.png
new file mode 100644
index 000000000..1e0618911
Binary files /dev/null and b/content/research_career/UWB_about/report/attachments/Pasted image 20240117160051.png differ
diff --git a/content/research_career/UWB_about/report/attachments/Pasted image 20240117160119.png b/content/research_career/UWB_about/report/attachments/Pasted image 20240117160119.png
new file mode 100644
index 000000000..1e0618911
Binary files /dev/null and b/content/research_career/UWB_about/report/attachments/Pasted image 20240117160119.png differ
diff --git a/content/research_career/UWB_about/report/attachments/Pasted image 20240117161234.png b/content/research_career/UWB_about/report/attachments/Pasted image 20240117161234.png
new file mode 100644
index 000000000..7b4ed5b6b
Binary files /dev/null and b/content/research_career/UWB_about/report/attachments/Pasted image 20240117161234.png differ
diff --git a/content/research_career/UWB_about/report/attachments/Pasted image 20240117161826.png b/content/research_career/UWB_about/report/attachments/Pasted image 20240117161826.png
new file mode 100644
index 000000000..3f57af120
Binary files /dev/null and b/content/research_career/UWB_about/report/attachments/Pasted image 20240117161826.png differ
diff --git a/content/research_career/UWB_about/report/attachments/Pasted image 20240117162209.png b/content/research_career/UWB_about/report/attachments/Pasted image 20240117162209.png
new file mode 100644
index 000000000..ae490c996
Binary files /dev/null and b/content/research_career/UWB_about/report/attachments/Pasted image 20240117162209.png differ
diff --git a/content/research_career/UWB_about/report/attachments/Pasted image 20240123212654.png b/content/research_career/UWB_about/report/attachments/Pasted image 20240123212654.png
new file mode 100644
index 000000000..41e21b17a
Binary files /dev/null and b/content/research_career/UWB_about/report/attachments/Pasted image 20240123212654.png differ
diff --git a/content/research_career/UWB_about/report/attachments/Pasted image 20240123215140.png b/content/research_career/UWB_about/report/attachments/Pasted image 20240123215140.png
new file mode 100644
index 000000000..65a92fd0e
Binary files /dev/null and b/content/research_career/UWB_about/report/attachments/Pasted image 20240123215140.png differ
diff --git a/content/research_career/UWB_about/report/attachments/Pasted image 20240123220530.png b/content/research_career/UWB_about/report/attachments/Pasted image 20240123220530.png
new file mode 100644
index 000000000..9d030af51
Binary files /dev/null and b/content/research_career/UWB_about/report/attachments/Pasted image 20240123220530.png differ
diff --git a/content/research_career/UWB_about/report/attachments/Pasted image 20240124144133.png b/content/research_career/UWB_about/report/attachments/Pasted image 20240124144133.png
new file mode 100644
index 000000000..98e8a644b
Binary files /dev/null and b/content/research_career/UWB_about/report/attachments/Pasted image 20240124144133.png differ
diff --git a/content/research_career/UWB_about/report/attachments/Pasted image 20240124144847.png b/content/research_career/UWB_about/report/attachments/Pasted image 20240124144847.png
new file mode 100644
index 000000000..5cf797e28
Binary files /dev/null and b/content/research_career/UWB_about/report/attachments/Pasted image 20240124144847.png differ
diff --git a/content/research_career/UWB_about/report/attachments/ad2ea92696689e9e923adc3dd45d696 1.jpg b/content/research_career/UWB_about/report/attachments/ad2ea92696689e9e923adc3dd45d696 1.jpg
new file mode 100644
index 000000000..53cb53952
Binary files /dev/null and b/content/research_career/UWB_about/report/attachments/ad2ea92696689e9e923adc3dd45d696 1.jpg differ
diff --git a/content/research_career/UWB_about/report/attachments/ad2ea92696689e9e923adc3dd45d696.jpg b/content/research_career/UWB_about/report/attachments/ad2ea92696689e9e923adc3dd45d696.jpg
new file mode 100644
index 000000000..53cb53952
Binary files /dev/null and b/content/research_career/UWB_about/report/attachments/ad2ea92696689e9e923adc3dd45d696.jpg differ
diff --git a/content/research_career/UWB_about/report/attachments/b4bb769d9dd67125e3571897d577036.png b/content/research_career/UWB_about/report/attachments/b4bb769d9dd67125e3571897d577036.png
new file mode 100644
index 000000000..a31e88ed2
Binary files /dev/null and b/content/research_career/UWB_about/report/attachments/b4bb769d9dd67125e3571897d577036.png differ
diff --git a/content/research_career/UWB_about/report/attachments/b902f9c023b000eb6413aa1649ed201 1.jpg b/content/research_career/UWB_about/report/attachments/b902f9c023b000eb6413aa1649ed201 1.jpg
new file mode 100644
index 000000000..5daaea849
Binary files /dev/null and b/content/research_career/UWB_about/report/attachments/b902f9c023b000eb6413aa1649ed201 1.jpg differ
diff --git a/content/research_career/UWB_about/report/attachments/b902f9c023b000eb6413aa1649ed201.jpg b/content/research_career/UWB_about/report/attachments/b902f9c023b000eb6413aa1649ed201.jpg
new file mode 100644
index 000000000..5daaea849
Binary files /dev/null and b/content/research_career/UWB_about/report/attachments/b902f9c023b000eb6413aa1649ed201.jpg differ
diff --git a/content/research_career/UWB_about/report/attachments/c081ff3279755e8e6c176e4255d97c7.jpg b/content/research_career/UWB_about/report/attachments/c081ff3279755e8e6c176e4255d97c7.jpg
new file mode 100644
index 000000000..a307d10d5
Binary files /dev/null and b/content/research_career/UWB_about/report/attachments/c081ff3279755e8e6c176e4255d97c7.jpg differ
diff --git a/content/research_career/UWB_about/report/attachments/c2354d770a165663e67141645ad9b00.png b/content/research_career/UWB_about/report/attachments/c2354d770a165663e67141645ad9b00.png
new file mode 100644
index 000000000..55638f692
Binary files /dev/null and b/content/research_career/UWB_about/report/attachments/c2354d770a165663e67141645ad9b00.png differ
diff --git a/content/research_career/UWB_about/report/attachments/df6821abfc09f5337071d9ed4d76a6f 1.png b/content/research_career/UWB_about/report/attachments/df6821abfc09f5337071d9ed4d76a6f 1.png
new file mode 100644
index 000000000..9204597d0
Binary files /dev/null and b/content/research_career/UWB_about/report/attachments/df6821abfc09f5337071d9ed4d76a6f 1.png differ
diff --git a/content/research_career/UWB_about/report/attachments/df6821abfc09f5337071d9ed4d76a6f.png b/content/research_career/UWB_about/report/attachments/df6821abfc09f5337071d9ed4d76a6f.png
new file mode 100644
index 000000000..9204597d0
Binary files /dev/null and b/content/research_career/UWB_about/report/attachments/df6821abfc09f5337071d9ed4d76a6f.png differ
diff --git a/content/research_career/UWB_about/report/attachments/e4c90ae3230c59cf8435ee16806c3eb.jpg b/content/research_career/UWB_about/report/attachments/e4c90ae3230c59cf8435ee16806c3eb.jpg
new file mode 100644
index 000000000..e2a14b705
Binary files /dev/null and b/content/research_career/UWB_about/report/attachments/e4c90ae3230c59cf8435ee16806c3eb.jpg differ
diff --git a/content/research_career/UWB_about/report/attachments/e62823e6aac8e6136052633bc485364.jpg b/content/research_career/UWB_about/report/attachments/e62823e6aac8e6136052633bc485364.jpg
new file mode 100644
index 000000000..275f7c84c
Binary files /dev/null and b/content/research_career/UWB_about/report/attachments/e62823e6aac8e6136052633bc485364.jpg differ
diff --git a/content/research_career/UWB_about/report/attachments/eddfb199412aa302a244ecc09588006 1.jpg b/content/research_career/UWB_about/report/attachments/eddfb199412aa302a244ecc09588006 1.jpg
new file mode 100644
index 000000000..c64288479
Binary files /dev/null and b/content/research_career/UWB_about/report/attachments/eddfb199412aa302a244ecc09588006 1.jpg differ
diff --git a/content/research_career/UWB_about/report/attachments/eddfb199412aa302a244ecc09588006.jpg b/content/research_career/UWB_about/report/attachments/eddfb199412aa302a244ecc09588006.jpg
new file mode 100644
index 000000000..c64288479
Binary files /dev/null and b/content/research_career/UWB_about/report/attachments/eddfb199412aa302a244ecc09588006.jpg differ
diff --git a/content/research_career/UWB_about/report/attachments/untitled 1.png b/content/research_career/UWB_about/report/attachments/untitled 1.png
new file mode 100644
index 000000000..f102a6b6b
Binary files /dev/null and b/content/research_career/UWB_about/report/attachments/untitled 1.png differ
diff --git a/content/research_career/UWB_about/report/attachments/untitled 10.png b/content/research_career/UWB_about/report/attachments/untitled 10.png
new file mode 100644
index 000000000..1035185bc
Binary files /dev/null and b/content/research_career/UWB_about/report/attachments/untitled 10.png differ
diff --git a/content/research_career/UWB_about/report/attachments/untitled 11.png b/content/research_career/UWB_about/report/attachments/untitled 11.png
new file mode 100644
index 000000000..e366341f6
Binary files /dev/null and b/content/research_career/UWB_about/report/attachments/untitled 11.png differ
diff --git a/content/research_career/UWB_about/report/attachments/untitled 12.png b/content/research_career/UWB_about/report/attachments/untitled 12.png
new file mode 100644
index 000000000..9de3a605b
Binary files /dev/null and b/content/research_career/UWB_about/report/attachments/untitled 12.png differ
diff --git a/content/research_career/UWB_about/report/attachments/untitled 13.png b/content/research_career/UWB_about/report/attachments/untitled 13.png
new file mode 100644
index 000000000..5eb10fe1e
Binary files /dev/null and b/content/research_career/UWB_about/report/attachments/untitled 13.png differ
diff --git a/content/research_career/UWB_about/report/attachments/untitled 14.png b/content/research_career/UWB_about/report/attachments/untitled 14.png
new file mode 100644
index 000000000..2ecc27d59
Binary files /dev/null and b/content/research_career/UWB_about/report/attachments/untitled 14.png differ
diff --git a/content/research_career/UWB_about/report/attachments/untitled 15.png b/content/research_career/UWB_about/report/attachments/untitled 15.png
new file mode 100644
index 000000000..7fb2f31ef
Binary files /dev/null and b/content/research_career/UWB_about/report/attachments/untitled 15.png differ
diff --git a/content/research_career/UWB_about/report/attachments/untitled 16.png b/content/research_career/UWB_about/report/attachments/untitled 16.png
new file mode 100644
index 000000000..893f43fc2
Binary files /dev/null and b/content/research_career/UWB_about/report/attachments/untitled 16.png differ
diff --git a/content/research_career/UWB_about/report/attachments/untitled 2.png b/content/research_career/UWB_about/report/attachments/untitled 2.png
new file mode 100644
index 000000000..798e61960
Binary files /dev/null and b/content/research_career/UWB_about/report/attachments/untitled 2.png differ
diff --git a/content/research_career/UWB_about/report/attachments/untitled 3.png b/content/research_career/UWB_about/report/attachments/untitled 3.png
new file mode 100644
index 000000000..5bf16a6d3
Binary files /dev/null and b/content/research_career/UWB_about/report/attachments/untitled 3.png differ
diff --git a/content/research_career/UWB_about/report/attachments/untitled 4.png b/content/research_career/UWB_about/report/attachments/untitled 4.png
new file mode 100644
index 000000000..5bf16a6d3
Binary files /dev/null and b/content/research_career/UWB_about/report/attachments/untitled 4.png differ
diff --git a/content/research_career/UWB_about/report/attachments/untitled 5.png b/content/research_career/UWB_about/report/attachments/untitled 5.png
new file mode 100644
index 000000000..0625bd7c8
Binary files /dev/null and b/content/research_career/UWB_about/report/attachments/untitled 5.png differ
diff --git a/content/research_career/UWB_about/report/attachments/untitled 6.png b/content/research_career/UWB_about/report/attachments/untitled 6.png
new file mode 100644
index 000000000..12b8fd623
Binary files /dev/null and b/content/research_career/UWB_about/report/attachments/untitled 6.png differ
diff --git a/content/research_career/UWB_about/report/attachments/untitled 7.png b/content/research_career/UWB_about/report/attachments/untitled 7.png
new file mode 100644
index 000000000..50b0e7444
Binary files /dev/null and b/content/research_career/UWB_about/report/attachments/untitled 7.png differ
diff --git a/content/research_career/UWB_about/report/attachments/untitled 8.png b/content/research_career/UWB_about/report/attachments/untitled 8.png
new file mode 100644
index 000000000..455575a9d
Binary files /dev/null and b/content/research_career/UWB_about/report/attachments/untitled 8.png differ
diff --git a/content/research_career/UWB_about/report/attachments/untitled 9.png b/content/research_career/UWB_about/report/attachments/untitled 9.png
new file mode 100644
index 000000000..2356c86ec
Binary files /dev/null and b/content/research_career/UWB_about/report/attachments/untitled 9.png differ
diff --git a/content/research_career/UWB_about/report/attachments/untitled.png b/content/research_career/UWB_about/report/attachments/untitled.png
new file mode 100644
index 000000000..dd49bcdad
Binary files /dev/null and b/content/research_career/UWB_about/report/attachments/untitled.png differ
diff --git a/content/research_career/UWB_about/report/flight_time_calculation.md b/content/research_career/UWB_about/report/flight_time_calculation.md
new file mode 100644
index 000000000..2c56ae539
--- /dev/null
+++ b/content/research_career/UWB_about/report/flight_time_calculation.md
@@ -0,0 +1,107 @@
+---
+title: Time-of-Flight Estimation
+tags:
+ - signal
+ - signal-processing
+ - VNA
+ - UWB
+---
+
+
+# Time-of-Flight Estimation
+
+## Plan A - Estimating Time of Flight Using Group Delay
+
+
+### Group Delay
+
+In telecommunications and signal processing, group delay describes the relationship between the phase shift a signal experiences when passing through a system and the signal frequency. It characterizes the phase response of a system to different frequency components and is especially useful in wideband communication systems.
+
+A common way to compute the group delay is as the derivative of phase with respect to frequency. Specifically, for a complex transfer function H(f) with phase φ(f), the group delay τ can be computed as
+
+$$
+\tau(\omega) = -\frac{d\phi(\omega)}{d\omega} = -\frac{1}{2\pi}\frac{d\phi(f)}{df}
+$$
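+
+In practice the group delay can be computed numerically from the measured S21 sweep. A minimal sketch, assuming `freqs` in Hz and complex `s21` values, with the phase unwrapped before differentiating:
+
+```python
+import numpy as np
+
+def group_delay(freqs, s21):
+    """Group delay tau = -d(phi)/d(omega) from a complex S21 sweep."""
+    phase = np.unwrap(np.angle(s21))        # unwrapped phase in radians
+    omega = 2 * np.pi * np.asarray(freqs)   # angular frequency in rad/s
+    return -np.gradient(phase, omega)       # numerical derivative, in seconds
+```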
+
+### Results
+
+
+
+
+It can be seen that, because the phase curve has turning points, the numerical derivative used for the group delay misbehaves at those points. We therefore need special handling of the derivative there; the approach used this time is to apply median filtering directly to the group-delay curve to remove the outliers at those points.
+
+Outlier removal method:
+
+```python
+def sp_detect(signal, window_size=5):
+    """Replace outlier samples with the local mean of their neighbours.
+
+    A sample is treated as an outlier when its absolute value exceeds three
+    times the mean absolute value of the surrounding window.
+    """
+    for i in range(window_size, len(signal) - window_size):
+
+        # Mean absolute value of the neighbours (the sample itself excluded)
+        window_mean = 0
+        for j in range(i - window_size, i):
+            window_mean += abs(signal[j])
+        for j in range(i + 1, i + window_size + 1):
+            window_mean += abs(signal[j])
+        window_mean = window_mean / (2 * window_size)
+
+        # Plain mean of the neighbours, used as the replacement value
+        replace_mean = 0
+        for j in range(i - window_size, i):
+            replace_mean += signal[j]
+        for j in range(i + 1, i + window_size + 1):
+            replace_mean += signal[j]
+        replace_mean = replace_mean / (2 * window_size)
+
+        # Replace the sample if it sticks out beyond 3x the local level
+        if abs(signal[i]) > 3 * window_mean:
+            signal[i] = replace_mean
+
+    return signal
+```
+
+
+After removing the outliers, the result is shown below:
+
+
+
+
+
+We then **take the median of the group-delay curve and subtract the median group delay of the mutual-coupling signal, using the difference as the delay time at that distance**, and plot delay time against distance as follows:
+
+
+
+
+It can be seen that estimating the time of flight from the group-delay curve did not work. Possible reasons include:
+
+1. **Multipath and multiple modes:** if the system has several propagation paths or transmission modes, the group delay can be affected by them.
+2. **Phase instability:** the phase information in the VNA measurement may be affected by phase instability in the measurement system, which can degrade the accuracy of the group delay.
+3. **Nonlinear effects:** in some cases nonlinear effects can shift the signal frequency and hence distort the phase.
+4. Our understanding of the relationship between group delay and time of flight is still incomplete, and the processing algorithm may not be sound.
+
+Planned improvements:
+
+1. **Use a window function to focus on the group delay in a specific frequency band**; it is still unclear whether the time of flight is reflected where the group-delay curve is stable or where the disturbances appear.
+
+
+
+## Plan B - Estimating Time of Flight from the Phase Change in a Single Frequency Band
+
+
+
+
+
+
+
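+A minimal sketch of this idea, assuming the delay is estimated by comparing the unwrapped S21 phase at each frequency against the mutual-coupling reference (ambiguous by one period at each frequency), could be:
+
+```python
+import numpy as np
+
+def delay_from_phase(freqs, s21, s21_reference):
+    """Per-frequency delay estimate from the phase change relative to a reference.
+
+    For an ideal extra delay tau the echo picks up exp(-j*2*pi*f*tau), so a
+    phase difference d_phi at frequency f maps to tau = -d_phi / (2*pi*f),
+    ambiguous by integer multiples of 1/f.
+    """
+    d_phi = np.unwrap(np.angle(s21)) - np.unwrap(np.angle(s21_reference))
+    return -d_phi / (2 * np.pi * np.asarray(freqs))
+```
+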
+Using the mutual coupling signal as the reference, the time of flight is calculated; the final result:
+
+
+
+
+
+
+This method still needs further refinement, and the errors introduced in Plan A affect this approach in the same way.
\ No newline at end of file
diff --git a/content/research_career/UWB_about/report/基于高频信号大趋势包络提取后的信号关键点提取.md b/content/research_career/UWB_about/report/基于高频信号大趋势包络提取后的信号关键点提取.md
new file mode 100644
index 000000000..2803f4354
--- /dev/null
+++ b/content/research_career/UWB_about/report/基于高频信号大趋势包络提取后的信号关键点提取.md
@@ -0,0 +1,103 @@
+---
+title: Detect Signal Impulse Point by High Frequency Signal Envelope Method
+tags:
+ - report
+ - envelope
+---
+
+# Problem
+
+The high-frequency signal contains so much detail that the envelope obtained with the conventional Hilbert transform follows the curve too closely, so the overall trend is not effectively simplified.
+
+# Envelope Extraction Methods
+
+The envelope extraction in this task is based on the MATLAB `envelope` function, which provides three methods:
+
+## Hilbert Transform
+
+$$
+H(\mu)(t) = \frac{1}{\pi}\, \text{p.v.} \int_{-\infty}^{\infty} \frac{\mu(\tau)}{t-\tau}\,d\tau
+$$
+
+
+
+```MATLAB
+analytical = hilbert(signal)
+envelope = abs(analytical)
+```
+
+The Hilbert transform converts a real signal into an analytic signal to study its instantaneous amplitude and phase. It produces an envelope that follows the curve very closely; this method has no tunable parameters. The result is shown below:
+
+
+
+A zoomed-in detail view is shown below:
+
+
+
+The envelope handles the fine detail well, but at the macroscopic level the result is not good.
+
+
+## Hilbert Filter
+
+```MATLAB
+[yupper, ylower] = envelope(x, fl, 'analytic')
+```
+
+The Hilbert-filter method filters x with a Hilbert FIR filter of length fl (filter length) to compute the analytic signal. This makes the envelope follow the detailed variations of the curve even more closely, and as fl increases the result approaches that of the plain Hilbert transform, which does not meet our needs. The effect is shown below:
+
+
+
+## RMS
+
+
+RMS (root mean square) is a way to measure signal energy and is used in many audio-processing applications, including envelope extraction. The RMS envelope represents the root-mean-square value of the signal over a period of time and is typically used to gauge the overall energy of the signal without being affected by the waveform shape.
+
+The RMS envelope is computed as follows:
+
+1. Square: square each sample of the signal.
+2. Average: average the squared values, usually over a time window.
+3. Square root: take the square root of the average to obtain the RMS value.
+
+Sliding this window along the signal yields the RMS envelope:
+
+```MATLAB
+[yupper, ylower] = envelope(x, wl, 'rms')
+```
+
+When MATLAB computes the RMS envelope, the overlap between windows is not adjustable.
+
+The results are as follows:
+
+
+
+The results show that as the RMS window gets longer, the result approaches that of the Hilbert-transform algorithm, so it still does not meet our needs.
+
+
+## Peaks
+
+```MATLAB
+[yupper, ylower] = envelope(x, np, 'peak')
+```
+
+With this method the envelope is determined by spline interpolation over local maxima that are separated by at least np samples.
+
+Testing the effect of different np values on our result:
+
+
+
+
+Clearly the peak-based algorithm is simple and direct; as np increases, the envelope becomes coarser. This approach may solve our problem.
+
+
+## Hilbert Transform + Low-Pass Filter
+
+On top of the Hilbert transform, the cutoff frequency and order of a low-pass filter are tuned to achieve a reasonably good smoothing effect, as shown below:
+
+
+
+This envelope method may also help us find the key points.
+
+
+# Conclusion
+
+After computing the envelope with the methods above, the peak points were used as the key time points for computing the time difference and hence the distance, but the result was still wrong.
diff --git a/content/research_career/attachments/CN101267424A.pdf b/content/research_career/attachments/CN101267424A.pdf
new file mode 100644
index 000000000..f7ab81a8b
Binary files /dev/null and b/content/research_career/attachments/CN101267424A.pdf differ
diff --git a/content/research_career/attachments/Eveleigh_Eric_A_2020July_MASc.pdf b/content/research_career/attachments/Eveleigh_Eric_A_2020July_MASc.pdf
new file mode 100644
index 000000000..d9980ef5e
Binary files /dev/null and b/content/research_career/attachments/Eveleigh_Eric_A_2020July_MASc.pdf differ
diff --git a/content/research_career/attachments/Figure_1 1.png b/content/research_career/attachments/Figure_1 1.png
new file mode 100644
index 000000000..3e8f59661
Binary files /dev/null and b/content/research_career/attachments/Figure_1 1.png differ
diff --git a/content/research_career/attachments/Figure_1 2.png b/content/research_career/attachments/Figure_1 2.png
new file mode 100644
index 000000000..6acfefb77
Binary files /dev/null and b/content/research_career/attachments/Figure_1 2.png differ
diff --git a/content/research_career/attachments/Figure_1 3.png b/content/research_career/attachments/Figure_1 3.png
new file mode 100644
index 000000000..3a0324956
Binary files /dev/null and b/content/research_career/attachments/Figure_1 3.png differ
diff --git a/content/research_career/attachments/Figure_1 4.png b/content/research_career/attachments/Figure_1 4.png
new file mode 100644
index 000000000..5f9cf68fd
Binary files /dev/null and b/content/research_career/attachments/Figure_1 4.png differ
diff --git a/content/research_career/attachments/Figure_1.png b/content/research_career/attachments/Figure_1.png
new file mode 100644
index 000000000..e07f0fe80
Binary files /dev/null and b/content/research_career/attachments/Figure_1.png differ
diff --git a/content/research_career/attachments/Figure_2.png b/content/research_career/attachments/Figure_2.png
new file mode 100644
index 000000000..9ed22a18f
Binary files /dev/null and b/content/research_career/attachments/Figure_2.png differ
diff --git a/content/research_career/attachments/Figure_3.png b/content/research_career/attachments/Figure_3.png
new file mode 100644
index 000000000..63a83d283
Binary files /dev/null and b/content/research_career/attachments/Figure_3.png differ
diff --git a/content/research_career/attachments/L-G-0000753449-0002366653.pdf b/content/research_career/attachments/L-G-0000753449-0002366653.pdf
new file mode 100644
index 000000000..b8759bb6f
Binary files /dev/null and b/content/research_career/attachments/L-G-0000753449-0002366653.pdf differ
diff --git a/content/research_career/attachments/Pasted image 20230924222031.png b/content/research_career/attachments/Pasted image 20230924222031.png
new file mode 100644
index 000000000..d85872c30
Binary files /dev/null and b/content/research_career/attachments/Pasted image 20230924222031.png differ
diff --git a/content/research_career/attachments/Pasted image 20230924222536.png b/content/research_career/attachments/Pasted image 20230924222536.png
new file mode 100644
index 000000000..a9e0f5220
Binary files /dev/null and b/content/research_career/attachments/Pasted image 20230924222536.png differ
diff --git a/content/research_career/attachments/Pasted image 20230924223212.png b/content/research_career/attachments/Pasted image 20230924223212.png
new file mode 100644
index 000000000..82373ab25
Binary files /dev/null and b/content/research_career/attachments/Pasted image 20230924223212.png differ
diff --git a/content/research_career/attachments/Pasted image 20230924223332.png b/content/research_career/attachments/Pasted image 20230924223332.png
new file mode 100644
index 000000000..1273dbd02
Binary files /dev/null and b/content/research_career/attachments/Pasted image 20230924223332.png differ
diff --git a/content/research_career/attachments/Pasted image 20230924223700.png b/content/research_career/attachments/Pasted image 20230924223700.png
new file mode 100644
index 000000000..1e3d8637a
Binary files /dev/null and b/content/research_career/attachments/Pasted image 20230924223700.png differ
diff --git a/content/research_career/attachments/Pasted image 20231016082022.png b/content/research_career/attachments/Pasted image 20231016082022.png
new file mode 100644
index 000000000..c17c44b6d
Binary files /dev/null and b/content/research_career/attachments/Pasted image 20231016082022.png differ
diff --git a/content/research_career/attachments/Pasted image 20231016082202.png b/content/research_career/attachments/Pasted image 20231016082202.png
new file mode 100644
index 000000000..0cc862ee1
Binary files /dev/null and b/content/research_career/attachments/Pasted image 20231016082202.png differ
diff --git a/content/research_career/attachments/Pasted image 20231016091540.png b/content/research_career/attachments/Pasted image 20231016091540.png
new file mode 100644
index 000000000..3b0d1304d
Binary files /dev/null and b/content/research_career/attachments/Pasted image 20231016091540.png differ
diff --git a/content/research_career/attachments/Time Domain Analysis Using a Network Analyzer.pdf b/content/research_career/attachments/Time Domain Analysis Using a Network Analyzer.pdf
new file mode 100644
index 000000000..02f43e10d
Binary files /dev/null and b/content/research_career/attachments/Time Domain Analysis Using a Network Analyzer.pdf differ
diff --git a/content/research_career/attachments/Untitled-1.png b/content/research_career/attachments/Untitled-1.png
new file mode 100644
index 000000000..156b7a47e
Binary files /dev/null and b/content/research_career/attachments/Untitled-1.png differ
diff --git a/content/research_career/device/VNA_set_instruction.md b/content/research_career/device/VNA_set_instruction.md
new file mode 100644
index 000000000..d8ce29ca8
--- /dev/null
+++ b/content/research_career/device/VNA_set_instruction.md
@@ -0,0 +1,31 @@
+---
+title: VNA simulate UWB echo signal setup
+tags:
+ - devices
+ - hardware
+ - VNA
+ - UWB
+---
+
+# Step by Step (Simple Overview)
+
+1. Connect the antennas to Port1 and Port2
+2. Connect the power cable at the back of the VNA and press the power button on the front to boot
+3. Open the VNA application on the desktop
+4. Set the following parameters:
+    1. Sweep Setup:
+        1. Number of sweep points: 201 -> 10001
+    2. Simulation:
+        1. Set Start Freq to 100kHz
+    3. Average:
+        1. Factor: 16 -> 128
+        2. Average Trigger: off -> on
+        3. Averaging: off -> on
+    4. Display
+        1. Num of Trace: 1 -> 2
+        2. Return to the root menu, click the blue Trace2, and change S11 to S21
+5. Prepare to measure and enter Trigger
+    1. Select continuous mode to warm up
+    2. Select single and wait for the indicator light to go out
+6. Click save
+    1. save Snp -> save S2P
diff --git a/content/research_career/device/attachments/15a688c916c8e3d62c9cd1b86f7699a.jpg b/content/research_career/device/attachments/15a688c916c8e3d62c9cd1b86f7699a.jpg
new file mode 100644
index 000000000..9c1904030
Binary files /dev/null and b/content/research_career/device/attachments/15a688c916c8e3d62c9cd1b86f7699a.jpg differ
diff --git a/content/research_career/device/attachments/Pasted image 20240119155428.png b/content/research_career/device/attachments/Pasted image 20240119155428.png
new file mode 100644
index 000000000..5f047e520
Binary files /dev/null and b/content/research_career/device/attachments/Pasted image 20240119155428.png differ
diff --git a/content/research_career/device/attachments/f09249c5b3281e8679dfe5235f1138a.jpg b/content/research_career/device/attachments/f09249c5b3281e8679dfe5235f1138a.jpg
new file mode 100644
index 000000000..97e951c7f
Binary files /dev/null and b/content/research_career/device/attachments/f09249c5b3281e8679dfe5235f1138a.jpg differ
diff --git a/content/research_career/device/infiniiMax_probes.md b/content/research_career/device/infiniiMax_probes.md
new file mode 100644
index 000000000..868ff0015
--- /dev/null
+++ b/content/research_career/device/infiniiMax_probes.md
@@ -0,0 +1,6 @@
+---
+title: InfiniiMax Probes
+tags:
+ - devices
+ - equipment
+---
diff --git a/content/research_career/device/real_time_vs_sampling_oscilloscope.md b/content/research_career/device/real_time_vs_sampling_oscilloscope.md
new file mode 100644
index 000000000..163e233f3
--- /dev/null
+++ b/content/research_career/device/real_time_vs_sampling_oscilloscope.md
@@ -0,0 +1,7 @@
+---
+title: Real Time vs. Sampling Oscilloscopes
+tags:
+ - devices
+ - equipment
+ - signal
+---
diff --git a/content/research_career/device/stm_load_note.md b/content/research_career/device/stm_load_note.md
new file mode 100644
index 000000000..37c59bd81
--- /dev/null
+++ b/content/research_career/device/stm_load_note.md
@@ -0,0 +1,19 @@
+---
+title: Download Code to STM32F103C8T6
+tags:
+ - devices
+ - hardware
+ - equipment
+---
+# ST-link Connection
+
+
+
+
+
+
+
+* Red —— TVCC
+* Orange —— TMS
+* Black —— Ground
+* White —— TCK
\ No newline at end of file
diff --git a/content/research_career/papers_read.md b/content/research_career/papers_read.md
new file mode 100644
index 000000000..6d787f5fc
--- /dev/null
+++ b/content/research_career/papers_read.md
@@ -0,0 +1,10 @@
+---
+title: Papers Read
+tags:
+ - work-about
+ - research-about
+---
+
+* [Papers Read in 2023.10](research_career/papers_read/papers_2023_10.md)
+* [Papers Read in 2023.11](research_career/papers_read/papers_2023_11.md)
+* [Papers Read About Pressure Injury, in Winter Holiday](research_career/papers_read/2024_winter_holiday/papers_read_about_pressure_injury.md)
\ No newline at end of file
diff --git a/content/research_career/papers_read/2024_winter_holiday/PI_overview.md b/content/research_career/papers_read/2024_winter_holiday/PI_overview.md
new file mode 100644
index 000000000..294a4a10f
--- /dev/null
+++ b/content/research_career/papers_read/2024_winter_holiday/PI_overview.md
@@ -0,0 +1,16 @@
+---
+title: A systematic review of pressure injury
+tags:
+ - pressure-injury
+ - overview
+---
+# A systematic review of pressure injury
+
+## Pressure Injury Basics (Introduction)
+
+
+Patients develop pressure injuries (pressure ulcers, PI) in hospital due to poor mobility, local pressure, circulatory conditions, and other predisposing factors.
+
+
+
+
diff --git a/content/research_career/papers_read/2024_winter_holiday/attachments/Pasted image 20240220005010.png b/content/research_career/papers_read/2024_winter_holiday/attachments/Pasted image 20240220005010.png
new file mode 100644
index 000000000..6630ca11f
Binary files /dev/null and b/content/research_career/papers_read/2024_winter_holiday/attachments/Pasted image 20240220005010.png differ
diff --git a/content/research_career/papers_read/2024_winter_holiday/attachments/Pasted image 20240220005029.png b/content/research_career/papers_read/2024_winter_holiday/attachments/Pasted image 20240220005029.png
new file mode 100644
index 000000000..2c6ccbbd1
Binary files /dev/null and b/content/research_career/papers_read/2024_winter_holiday/attachments/Pasted image 20240220005029.png differ
diff --git a/content/research_career/papers_read/2024_winter_holiday/attachments/Pasted image 20240220163106.png b/content/research_career/papers_read/2024_winter_holiday/attachments/Pasted image 20240220163106.png
new file mode 100644
index 000000000..201f95b32
Binary files /dev/null and b/content/research_career/papers_read/2024_winter_holiday/attachments/Pasted image 20240220163106.png differ
diff --git a/content/research_career/papers_read/2024_winter_holiday/attachments/Pasted image 20240220163117.png b/content/research_career/papers_read/2024_winter_holiday/attachments/Pasted image 20240220163117.png
new file mode 100644
index 000000000..201f95b32
Binary files /dev/null and b/content/research_career/papers_read/2024_winter_holiday/attachments/Pasted image 20240220163117.png differ
diff --git a/content/research_career/papers_read/2024_winter_holiday/attachments/Screenshot from 2024-02-20 15-35-18.png b/content/research_career/papers_read/2024_winter_holiday/attachments/Screenshot from 2024-02-20 15-35-18.png
new file mode 100644
index 000000000..e4b53c844
Binary files /dev/null and b/content/research_career/papers_read/2024_winter_holiday/attachments/Screenshot from 2024-02-20 15-35-18.png differ
diff --git a/content/research_career/papers_read/2024_winter_holiday/attachments/Screenshot from 2024-02-20 16-31-11 1.png b/content/research_career/papers_read/2024_winter_holiday/attachments/Screenshot from 2024-02-20 16-31-11 1.png
new file mode 100644
index 000000000..bbea7c65c
Binary files /dev/null and b/content/research_career/papers_read/2024_winter_holiday/attachments/Screenshot from 2024-02-20 16-31-11 1.png differ
diff --git a/content/research_career/papers_read/2024_winter_holiday/attachments/Screenshot from 2024-02-20 16-31-11.png b/content/research_career/papers_read/2024_winter_holiday/attachments/Screenshot from 2024-02-20 16-31-11.png
new file mode 100644
index 000000000..bbea7c65c
Binary files /dev/null and b/content/research_career/papers_read/2024_winter_holiday/attachments/Screenshot from 2024-02-20 16-31-11.png differ
diff --git a/content/research_career/papers_read/2024_winter_holiday/attachments/Screenshot from 2024-02-20 16-32-00.png b/content/research_career/papers_read/2024_winter_holiday/attachments/Screenshot from 2024-02-20 16-32-00.png
new file mode 100644
index 000000000..210be6a0d
Binary files /dev/null and b/content/research_career/papers_read/2024_winter_holiday/attachments/Screenshot from 2024-02-20 16-32-00.png differ
diff --git a/content/research_career/papers_read/2024_winter_holiday/attachments/Screenshot from 2024-02-20 17-11-27.png b/content/research_career/papers_read/2024_winter_holiday/attachments/Screenshot from 2024-02-20 17-11-27.png
new file mode 100644
index 000000000..a66176d02
Binary files /dev/null and b/content/research_career/papers_read/2024_winter_holiday/attachments/Screenshot from 2024-02-20 17-11-27.png differ
diff --git a/content/research_career/papers_read/2024_winter_holiday/attachments/Screenshot from 2024-02-20 17-50-52.png b/content/research_career/papers_read/2024_winter_holiday/attachments/Screenshot from 2024-02-20 17-50-52.png
new file mode 100644
index 000000000..faa3b353f
Binary files /dev/null and b/content/research_career/papers_read/2024_winter_holiday/attachments/Screenshot from 2024-02-20 17-50-52.png differ
diff --git a/content/research_career/papers_read/2024_winter_holiday/papers_read_about_pressure_injury.md b/content/research_career/papers_read/2024_winter_holiday/papers_read_about_pressure_injury.md
new file mode 100644
index 000000000..9354bb68b
--- /dev/null
+++ b/content/research_career/papers_read/2024_winter_holiday/papers_read_about_pressure_injury.md
@@ -0,0 +1,335 @@
+---
+title: Papers Read on Pressure Injury
+tags:
+ - papers
+ - pressure-injury
+---
+
+
+## A Comprehensive and Improved Definition for Hospital-Acquired Pressure Injury Classification Based on Electronic Health Records: Comparative Study
+
+### DOI
+
+http://dx.doi.org/10.2196/40672
+
+### Publication Date
+
+2023.02.23
+
+### Abstract
+
+* Hospital-acquired pressure injuries (HAPI) are an important nursing metric and one of the most common preventable events in hospitals.
+* The article aims to build machine-learning models on electronic health records (EHRs) to identify and predict HAPI.
+* The current problems are:
+    * Accurate models depend on high-quality HAPI labels; however, **different data sources in the EHR can provide conflicting information about whether a patient developed a HAPI**.
+    * Existing HAPI definitions are inconsistent with each other, even on the same patient population. The inconsistent criteria make it impossible to **benchmark** machine-learning methods for predicting HAPI.
+* The article has three goals:
+    * Identify the discrepancies among HAPI sources in the EHR
+    * Formulate a comprehensive definition for HAPI classification using data from all EHR sources
+    * Illustrate the importance of an improved HAPI definition
+* Methods: the agreement between HAPI events recorded in clinical notes, diagnosis codes, procedure codes, and chart events in the MIMIC-III database was evaluated. The criteria used by three existing HAPI definitions and their adherence to regulatory guidelines were analyzed. Emory HAPI (EHAPI), an improved and more comprehensive HAPI definition, is proposed. The importance of the labels for training HAPI classification models was then evaluated using tree-based and sequential neural-network classifiers.
+* Finally, the article illustrates the complexity of defining HAPI: fewer than 13% of hospital stays had at least 3 PI indications documented across the 4 data sources. Although chart events were the most common indicator, they were the only PI documentation for more than 49% of stays. The authors demonstrate the lack of agreement between existing HAPI definitions and EHAPI, with only 219 stays having consistent positive labels. The analysis highlights the importance of an improved HAPI definition: a classifier trained with the EHAPI labels outperforms nurse annotations and the consensus set (in which any PI evidence is labeled positive).
+
+> [!abstract]
+> Because the definition of HAPI is not unified, the article re-derives a definition for HAPI classification from many different EHR data sources and proposes EHAPI. A classifier trained with the EHAPI labels achieves better performance.
+
+
+### Tech Detail
+
+* The Centers for Medicare and Medicaid Services (CMS) and the Agency for Healthcare Research and Quality (AHRQ) consider HAPI a "never event", i.e., an event that brings severe financial penalties for the healthcare provider.
+* The National Pressure Injury Advisory Panel (NPIAP) reference guideline defines the facility-acquired rate as "the percentage of individuals who did not have a pressure injury on admission but developed one during their stay in the facility".
+* Data Sources for PI in Hospital Stays
+    * **Chart Events**
+        * "Chart events" are the time-series data in the medical record that describe the various observations, monitored values, and measurements taken during the hospital stay.
+        * They cover several categories of data, such as physiological parameters (blood pressure, heart rate, temperature, etc.), laboratory results (blood, urine, biochemistry, drug concentrations, etc.), and medication administration (dose, route, etc.). These values are obtained through regular or ad-hoc observation, monitoring, and measurement and reflect the patient's condition and physiological state in the record.
+    * **Notes**
+        * "Notes" are the free-text part of the medical record, in which physicians, nurses, and other clinicians document the patient's condition, treatment plan, surgical course, orders, and so on in detail.
+        * In MIMIC-III the notes include many types of text records, such as nursing notes, discharge summaries, operative reports, radiology reports, ECG interpretations, social history, family history, and past medical history. These texts provide rich information about disease progression, the diagnostic process, treatment decisions, and how the care plan was carried out.
+    * **Diagnosis Codes**
+        * "Diagnosis codes" describe the patient's diagnoses and provide a standardized classification. They usually come from international classification systems (such as ICD-9-CM or ICD-10-CM) and are used to track, record, and count diagnostic information.
+        * The diagnosis codes in MIMIC-III include several types, such as principal diagnoses, secondary diagnoses, and pre-operative diagnoses, and give a detailed description of the patient's diagnoses, including the type and severity of disease and possible complications.
+    * **Procedure Codes**
+        * Procedure codes are classification identifiers for the medical procedures or operations a patient receives. They record the various medical and surgical interventions performed during care, usually come from medical coding systems (such as ICD-9-CM or ICD-10-PCS), and are used to track and count particular kinds of procedures.
+        * In MIMIC-III, procedure codes can be used to analyze and study aspects such as procedure type, surgical risk, and post-operative complications.
+* Ideal HAPI Criteria Based on Guidelines
+    * CMS provides several inclusion and exclusion criteria for HAPI
+    * One inclusion criterion is the presence of one or more new or worsened PIs at discharge compared with admission.
+    * One inclusion criterion is an unstageable PI that becomes staged.
+    * Missing data at the discharge assessment for new or worsened stage 2, 3, and 4 or unstageable pressure injuries (including deep tissue injury) is an exclusion.
+    * Death of the patient is also recorded as an exclusion.
+* Existing MIMIC-III HAPI Case Definitions and Their Limitations
+
+| Existing Methods for Definitions | Limitations | Reference |
+| ---- | ---- | ---- |
+| Recurrent additive network for temporal risk prediction (CANTRIP) focuses on predicting HAPI 48 to 96 hours before its first appearance or the date of event (**DOE**), i.e., the first mention of a PI-related keyword in time-stamped hospital records or a PI staging chart event (stage >= 1) more than 48 hours after admission. Data without a DOE are used as controls. | The study includes deceased patients as well as patients whose PIs healed or improved. | 10.1093/jamia/ocaa004 |
+| Cramer et al. tried to develop a PI screening tool using the first 24 hours of data. They identify HAPI cases **using only PI staging chart events occurring more than 24 hours after admission**. | It excludes stage 1 PIs as well as "unstageable" and deep-tissue-injury PIs. Like CANTRIP, the Cramer case definition includes deceased patients and healed or improved PIs. | 10.5334/egems.307 |
+| Sotoodeh et al. explored negation preprocessing of clinical text to detect PIs. Case patients are defined **using International Classification of Diseases (ICD)-9 codes or PI-specific keywords in clinical notes**. | Deceased, healed, or improved PIs are included in the case definition; PI staging chart events are not considered. | https://europepmc.org/abstract/MED/33936492 |
+| ... | ... | ... |
+* **EHAPI Case Definition in MIMIC-III** *Paper's Key Section*
+ 
+ * D: dimension
+ * DTI: deep tissue injury
+
+
+
+## A systematic review of predictive models for hospital-acquired pressure injury using machine learning
+
+### DOI
+
+10.1002/nop2.1429
+
+### Date
+
+2023.03
+
+### Abstract
+
+This article is a systematic review that summarizes the use of machine learning (ML) for predicting hospital-acquired pressure injury (HAPI), systematically evaluates the performance and construction process of the ML models, and provides a reference for building high-quality ML prediction models.
+
+The review covers articles from 2010 - 2021 in the PubMed, Web of Science, Scopus, Embase, and CINAHL databases. The inclusion assessment used the **prediction model risk of bias assessment tool (PROBAST)**; in the end, 23 studies were selected from 1793 articles, with sample sizes ranging from 149 to 75353 and PI incidence from 0.5% to 49.8%.
+
+The remaining problems are *data management* (not entirely clear), data pre-processing, and model validation.
+
+The ultimate aim of integrating ML into HAPI prediction is to develop a practical clinical decision tool.
+
+
+### Tech Detail
+
+* PROBAST 用于评估诊断和预后预测模型研究的偏倚风险及适用性,其中包括四个领域(参与者、预测变量、结果和分析)共 20 个问题。每个问题和领域的偏倚风险可评为低、不清楚或高。—— https://doi.org/10.7326/M18-1376
+
+
+## A systematic review of movement monitoring devices to aid the prediction of pressure ulcers in at-risk adults
+
+### DOI
+
+10.1111/iwj.13902
+
+### Date
+
+2023.02
+
+### Abstract
+
+文章为一篇综述,探讨运动监测设备对成人压疮(PU)风险预测和预防的影响。文章纳入的标准为英语撰写,采用前瞻性设计,使用运动监测设备评估成年患者在床上的运动。
+
+综述使用 PubMed、CINAHL、Scopus、Cochrane 和 EMBASE 数据库,返回 1537 条记录,其中 25 条符合纳入标准。使用预先设计的提取工具提取数据,并使用循证图书馆管理(EBL)进行质量评估,发现这些研究中总共使用了 19 种不同的运动监测设备。
+
+使用这些运动监测设备的重点是量化运动的数量和类型。在四项研究中,作者将监测系统与压疮(pressure ulcer)风险评估工具进行了比较,观察到的相关性有高有低。
+
+## An eHealth System for Pressure Ulcer Risk Assessment Based on Accelerometer and Pressure Data
+
+### DOI
+
+10.1155/2015/106537
+
+### Date
+
+2015
+
+### Abstract
+
+基于现有的监测压疮危险因素的系统存在局限性,特别是对于中等风险的人,这篇研究文章提出了一种基于加速度计和压力数据的压疮风险评估的新型电子健康系统。 该系统**结合了加速度计和压力传感器的优点来监测压疮风险因素**。 系统的结构为:传感器检测躺在床垫上的人的重新定位,并将数据发送到平板电脑,在那里进行分析并以图形方式呈现。 该系统在具有中等压疮风险的人家中进行了长期测试评估。 结果表明,该系统能够检测人躺在床上时的运动,并且*运动能力与布雷登压疮风险之间存在微弱的相关性*。 该系统提供的图形说明可以帮助护理人员优化对具有中度至高压溃疡风险的人的护理。
+
+
+### Tech Detail
+
+
+
+
+## A fabric‑based wearable sensor for continuous monitoring of decubitus ulcer of subjects lying on a bed
+
+### DOI
+
+10.1038/s41598-023-33081-7
+
+### Date
+
+2023.04.08
+
+### Abstract
+
+本文讨论了基于织物的可穿戴传感器的开发,用于连续监测卧床患者的褥疮溃疡。 作者强调了**精确生理信号检测的必要性**,并解决了与*无线通信*、*软*、*非侵入性*和*一次性*的挑战。
+
+文章主要描述了一种传感器的开发,该传感器使用传统的模拟电路在一次性、基于透气织物的系统中进行无线通信。 该传感器测量压力、温度和皮肤阻抗,并将数据连续连续地发送到手机应用程序。 作者对健康受试者进行了试点测试,以评估传感器的无线操作。 基于织物的传感器成功测量了**施加的压力、皮肤温度和皮肤阻抗**。 文章强调了早期发现褥疮的重要性以及基于织物的传感器监测皮肤状况的潜力。
+
+
+## An Integrated System of Braden Scale and Random Forest Using Real‑Time Diagnoses to Predict When Hospital‑Acquired Pressure Injuries (Bedsores) Occur
+
+### DOI
+
+10.3390/ijerph20064911
+
+### Date
+
+2023-03-10
+
+### Abstract
+
+文章旨在开发随机森林 (RF) 和 Braden 量表的混合系统,以 预测医院获得性压力损伤 (HAPI) 发生的时间。
+
+之前的研究重点是预测谁将出现 HAPI,但本研究的重点是预测高危患者何时会出现 HAPI。该研究收集了 485 名患者的实时诊断和危险因素,并使用递归特征消除(RFE)来选择最佳因素。数据集分为训练集和测试集,并使用带有 RF 的网格搜索来预测 HAPI 时间。与其他七种算法相比,所提出的模型取得了最佳性能,曲线下面积(AUC)为 91.20,几何平均值(G-mean)为 91.17。预测 HAPI 时间的最主要的交互风险因素是*住院期间入住 ICU*(~~应该是一个01变量~~)、Braden 分量表、BMI、刺激麻醉、患者拒绝改变体位以及其他实验室诊断。该研究强调了确定患者何时可能出现 HAPI 以进行早期干预和个性化护理计划的重要性。
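+
+A rough sketch of the kind of pipeline described above (RFE feature selection followed by a grid-searched random forest), using scikit-learn on placeholder data; the feature count, grid values, and labels below are illustrative assumptions, not the paper's settings.
+
+```python
+# Hypothetical sketch: RFE to pick risk factors, then a grid-searched RF.
+import numpy as np
+from sklearn.ensemble import RandomForestClassifier
+from sklearn.feature_selection import RFE
+from sklearn.model_selection import GridSearchCV, train_test_split
+
+rng = np.random.default_rng(0)
+X = rng.normal(size=(485, 20))          # placeholder risk factors
+y = rng.integers(0, 2, size=485)        # placeholder HAPI-timing labels
+
+X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
+
+# Step 1: recursive feature elimination selects the most informative factors
+selector = RFE(RandomForestClassifier(n_estimators=100, random_state=0),
+               n_features_to_select=8).fit(X_tr, y_tr)
+
+# Step 2: grid search over RF hyper-parameters on the selected features
+grid = GridSearchCV(RandomForestClassifier(random_state=0),
+                    {"n_estimators": [100, 300], "max_depth": [None, 10]},
+                    scoring="roc_auc", cv=5)
+grid.fit(selector.transform(X_tr), y_tr)
+print(grid.best_params_, grid.score(selector.transform(X_te), y_te))
+```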
+
+
+### Tech Detail
+
+Braden Risk Assessment Scale(Braden 风险评估量表)是一种常用于评估患者发生压力性溃疡风险的工具。它由6个因素组成,包括感觉知觉、潮湿程度、活动能力(activity)、移动能力(mobility)、营养状况和摩擦/剪切力。该评估工具通过对这些因素进行评分,判断患者发生压力性溃疡的风险。
+
+每个因素的评分在1到4之间(摩擦/剪切力为1到3),总分在6到23之间。较低的总分表示较高的风险,而较高的总分表示较低的风险。通过使用Braden Risk Assessment Scale,医护人员可以在早期识别患者的风险,并采取相应的预防措施,降低患者发生压力性溃疡的风险。
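+
+A minimal scoring sketch follows; the six sub-scale names mirror the factors above, and the risk cut-offs are the commonly quoted Braden categories rather than values taken from this paper.
+
+```python
+# Hedged sketch: Braden total score and a commonly used risk banding.
+BRADEN_ITEMS = ["sensory_perception", "moisture", "activity",
+                "mobility", "nutrition", "friction_shear"]
+
+def braden_total(scores: dict) -> int:
+    """Sum the six sub-scale scores (friction/shear 1-3, the rest 1-4)."""
+    return sum(scores[item] for item in BRADEN_ITEMS)
+
+def braden_risk(total: int) -> str:
+    if total <= 9:
+        return "very high risk"
+    if total <= 12:
+        return "high risk"
+    if total <= 14:
+        return "moderate risk"
+    if total <= 18:
+        return "mild risk"
+    return "no particular risk"
+
+example = {"sensory_perception": 3, "moisture": 2, "activity": 1,
+           "mobility": 2, "nutrition": 3, "friction_shear": 2}
+print(braden_total(example), braden_risk(braden_total(example)))  # 13, moderate risk
+```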
+
+
+
+## An Integrated System of Multifaceted Machine Learning Models to Predict If and When Hospital-Acquired Pressure Injuries (Bedsores) Occur
+
+### DOI
+
+10.3390/ijerph20010828
+
+### Date
+
+2023.01.01
+
+### Abstract
+
+文章提出了一项使用机器学习模型预测医院获得性压力损伤 (HAPI) 的研究。 作者强调了现有方法的不足之处,这些*方法仅预测患者是否会发生 HAPI,而不考虑发生的严重程度或时间*。 为了解决这个问题,该研究开发了一个多方面机器学习模型的集成系统。 在第一阶段,使用成本敏感支持向量机的遗传算法 (GA-CS-SVM) 来预测患者是否会出现 HAPI。 在第 2 阶段,采用 SVM 网格搜索 (GS-SVM) 来预测高危患者何时会发生 HAPI。 将所开发模型的性能与最先进的模型进行比较,获得了更好的性能表现。
+
+
+### Tech Detail
+
+* Dataset
+ * 通过EHR中的伤口、造口、失禁和护士笔记来验证HAPI患者
+ * 本研究的目的是根据多种风险因素预测患者发生 HAPI 的风险以及预计何时发生 HAPI。包括 Braden 风险评估子量表在内的 98 个风险因素被用作 ML 模型的输入;
+ * 同时,模型还要预测严重程度和时间,因此还有第二阶段的数据,具体分布:
+
+
+
+
+* Model Arch
+
+
+
+## Bedside Technologies to Enhance the Early Detection of Pressure Injuries
+
+### DOI
+
+10.1097/WON.0000000000000626
+
+### Date
+
+2020.03
+
+### Abstract
+
+该综述包括研究可用于检测压力相关性变白红斑 (PrBE)、压力相关性非变白性红斑 (PrNBE) 和深层组织压力性损伤 (DTPI) 的可用技术。 研究人员对多个数据库进行了系统搜索,并确定了 18 项符合条件的研究,这些研究代表了多种技术,包括**超声波、热成像、表皮下水分测量、反射光谱测定和激光多普勒**。
+
+其中表皮下水分测量为压力损伤的早期检测提供了最一致的结果。
+
+## Detection of Pressure Ulcers Using Electrical Impedance Tomography
+
+### DOI
+
+10.1109/I2MTC48687.2022.9806603
+
+### Date
+
+2022.05.16
+
+### Abstract
+
+本文讨论了使用电阻抗断层扫描 (Electrical Impedance Tomography, EIT) 检测压疮。 压疮是慢性皮肤伤口,在早期阶段很难发现和监测。 EIT 是一种非侵入性成像技术,可以可视化组织中生物电阻抗参数的分布。 该文章提出了一种基于 EIT 和柔性传感器阵列的压疮检测方法,并通过有限元仿真模型和物理实验来评估该方法的有效性。 主要发现表明,EIT 有潜力成为一种更安全、便携、低成本、实时的早期压疮监测系统。
+
+### Tech detail
+
+* EIT技术能够构建组织电特性分布图像并反映生物组织病变。
+
+## Electrically tunable two-dimensional heterojunctions for miniaturized near-infrared spectrometers
+
+### DOI
+
+10.1038/s41467-022-32306-z
+
+### Date
+
+2022.08.08
+
+### Abstract
+
+一种可以用于检测PI的技术,涉及到电化学, 不懂
+
+
+## High-quality semiconductor fibres via mechanical design
+
+### DOI
+
+10.1038/s41586-023-06946-0
+
+### Date
+
+2024-02-01
+
+### Abstract
+
+有关于半导体纤维的文章,应该可以用于PI检测中的运动学检测部分,也可以用于皮肤表面压力、阻抗等数据测量的传感器技术支持;暂且不看
+
+
+## Impedance sensing device enables early detection of pressure ulcers in vivo
+
+### DOI
+
+10.1038/ncomms7575
+### Date
+
+2015.03.17
+### Abstract
+
+本文讨论了一种灵活的电子设备的开发,该设备可以检测压力引起的组织损伤,即使损伤无法通过肉眼观察到。 该设备使用**柔性电极阵列上的阻抗谱来绘制组织阻抗并将其与组织健康相关联**。 研究人员在大鼠模型上进行了实验,发现阻抗测量值与多种动物和伤口类型的组织健康密切相关。 结果证明了使用该设备作为自动化、非侵入性“智能绷带”来早期检测压疮的可行性。
+
+## In-Advance Prediction of Pressure Ulcers via Deep-Learning-Based Robust Missing Value Imputation on Real-Time Intensive Care Variables
+
+### DOI
+
+10.3390/jcm13010036
+### Date
+
+2023.12.20
+### Abstract
+
+该文章讨论了用于实时预测压疮(PU)的临床决策支持系统的开发。 该研究利用机器学习 (ML) 和深度学习 (DL) 模型,利用 MIMIC-IV 和江原道国立大学医院 (KNUH) 数据集来预测 PU 的发生。 为了解决时间序列数据中缺失值的挑战,作者提出了一种名为 GRU-D++ 的新型循环神经网络模型。 该模型优于其他实验模型,实现了准时和提前 48 小时 PU 预测的高预测精度。 研究结果表明,GRU-D++模型可以显着减轻医护人员的工作量,并能够及时对ICU内的PU进行干预。
+
+### Tech Detail
+
+
+
+
+## Integrated System for Pressure Ulcers Monitoring and Prevention
+
+### ISBN
+
+978-3-031-26851-9 978-3-031-26852-6
+### Date
+
+2023
+
+### Abstract
+
+文章介绍了一个用于压疮监测和预防的集成系统。该系统包括压疮管理门户和移动应用程序,使护理人员能够管理有关患者压疮的临床信息,并为监测提供有用的信息。 该系统考虑了导致压疮的内在和外在因素,并包括数据采集、数据分析和为临床决策提供补充支持的组件。 该研究的主要发现包括开发一个系统,该系统可以提高患者护理质量和安全性,同时最大限度地减少医疗保健专业人员的倦怠。 该系统利用基于传感器的技术和机器学习算法为护理人员提供实时监控并生成警报和建议。
+
+
+
+## Machine Learning Techniques, Applications, and Potential Future Opportunities in Pressure Injuries (Bedsores) Management: A Systematic Review
+
+### DOI
+
+10.3390/ijerph20010796
+
+### Date
+
+2023.01.01
+
+### Abstract
+
+该综述的重点是机器学习 (ML) 在管理压力性损伤 (PI) 患者中的应用。 作者总结了 2007 年 1 月至 2022 年 7 月期间机器学习在 PI 管理中的贡献,根据医学专业对研究进行分类,分析差距并确定未来研究方向的机会。 该综述遵循 PRISMA 指南,包括 90 项符合条件的研究。 根据PI发生时间将文章分为三类:发生前、发生时、发生后。 每个类别又根据医学专业进一步细分为子领域,形成十六个领域。 文中概述和讨论了 PI 管理中最相关和潜在有用的应用和方法,例如深度学习技术和混合模型,强调了现有风险评估工具与机器学习的集成,还讨论了 PI 的预防和后果,强调早期检测和个性化护理计划的重要性。 总体而言,该综述提供了对 PI 管理中机器学习应用现状的见解,并确定了未来研究的领域。
+
+### Tech Detail
+
+
+
diff --git a/content/research_career/papers_read/attachments/Dr._Ing. Jurgen Sachs(auth.) - Handbook of Ultra-Wideband Short-Range Sensing_ Theory, Sensors, Applications (2012).pdf b/content/research_career/papers_read/attachments/Dr._Ing. Jurgen Sachs(auth.) - Handbook of Ultra-Wideband Short-Range Sensing_ Theory, Sensors, Applications (2012).pdf
new file mode 100644
index 000000000..5d12958a3
Binary files /dev/null and b/content/research_career/papers_read/attachments/Dr._Ing. Jurgen Sachs(auth.) - Handbook of Ultra-Wideband Short-Range Sensing_ Theory, Sensors, Applications (2012).pdf differ
diff --git a/content/research_career/papers_read/papers_2023_10.md b/content/research_career/papers_read/papers_2023_10.md
new file mode 100644
index 000000000..2b1c18997
--- /dev/null
+++ b/content/research_career/papers_read/papers_2023_10.md
@@ -0,0 +1,54 @@
+---
+title: Papers Read in 2023.10
+tags:
+ - papers
+ - research-about
+---
+
+# Measurements of UWB through-the-wall propagation using spectrum analyzer and the Hilbert transform
+
+## DOI
+
+[https://doi.org/10.1002/mop.23107](https://doi.org/10.1002/mop.23107)
+## Abstract
+
+本文讲解了如何利用spectrum analyzer去测量UWB穿墙特性。本文的关键点在于利用Hilbert变换去**retrieve phase information**。这样就可以不使用昂贵的VNA设备去进行UWB穿墙特性的测量。
+
+## Key Point
+
+1. "**Channel measurements** for communication applications are **usually performed in the frequency domain** because of the *availability of the required instrumentations*, *the moderate cost*, and the *large associated dynamic range.*"
+
+2. "For **narrowband channel characterization**, the **phase data are less important** because the phase can be approximated as a linear phase component. For **UWB channels,** however, the **phase is a critical parameter** and the nature of its variations with frequency over an ultra-wide bandwidth can significantly impact the time-domain response."
+
+ * If the delay is not constant for different frequency components, the received signal will be distorted. - **群延时性**
+
+ * 本文就利用Hilbert transform获得phase information(见本列表后的示例代码)
+
+3. Even with a VNA, phase information is hard to acquire.
+ * **Bad environment**, Long distance and wall obstructions
+ * Synchronization feedback cable should be **very low loss at the upper frequency edge**. - *明白了但仍未从逻辑上理解*
+ * Phase measurements are quite **sensitive**, in-situ measurements' error will skew result
+ * **Transmitter receiver crosstalk** will result in precursors in the impulse response
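+
+Below is a small numerical sketch of one common way to recover phase from magnitude-only data with the Hilbert transform (a minimum-phase assumption); it is only an illustration of the idea, not the exact procedure used in the paper, and the magnitude response is made up.
+
+```python
+# Assumption: under a minimum-phase model, phase(w) = -Hilbert{ ln|H(w)| }.
+import numpy as np
+from scipy.signal import hilbert
+
+f = np.linspace(0.1e9, 6e9, 1024)        # frequency grid (Hz), illustrative
+mag = 1.0 / (1.0 + (f / 2e9) ** 2)       # toy magnitude response |H(f)|
+
+log_mag = np.log(mag)
+# scipy's hilbert() returns the analytic signal x + j*H{x}
+phase_est = -np.imag(hilbert(log_mag))   # minimum-phase estimate (rad)
+
+H_est = mag * np.exp(1j * phase_est)     # complex response, ready for an IFFT
+print(phase_est[:5])
+```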
+
+
+
+
+# Time Domain Analysis Using a Network Analyzer
+
+
+## pdf
+
+[Time Domain Analysis Using a Network Analyzer.pdf](https://pinktalk.online/research_career/attachments/Time%20Domain%20Analysis%20Using%20a%20Network%20Analyzer.pdf)
+
+
+## Key point
+
+
+
+# A Time-Domain Measurement System for UWB Microwave Imaging
+
+
+## DOI
+
+[10.1109/TMTT.2018.2801862](https://doi.org/10.1109/TMTT.2018.2801862)
+
diff --git a/content/research_career/papers_read/papers_2023_11.md b/content/research_career/papers_read/papers_2023_11.md
new file mode 100644
index 000000000..75d8aa78b
--- /dev/null
+++ b/content/research_career/papers_read/papers_2023_11.md
@@ -0,0 +1,57 @@
+---
+title: Papers Read in 2023.11
+---
+
+# DESIGNING A UWB GENERATOR AND ANTENNA FOR CWD RADAR
+
+## Download link
+
+* [DESIGNING A UWB GENERATOR AND ANTENNA FOR CWD RADAR, the file is protected by password, contact me if you want to view](https://pinktalk.online/research_career/attachments/Eveleigh_Eric_A_2020July_MASc.pdf)
+
+## Abstract
+
+## Key points
+
+# Handbook of Ultra-Wideband Short-Range Sensing
+
+## Download link
+
+* [Sachs, Jürgen. _Handbook of Ultra‐Wideband Short‐Range Sensing: Theory, Sensors, Applications_. 1st ed., Wiley, 2012. _DOI.org (Crossref)_, https://doi.org/10.1002/9783527651818, preview version](https://download.e-bookshelf.de/download/0000/7534/49/L-G-0000753449-0002366653.pdf)
+* [Sachs, Jürgen. _Handbook of Ultra‐Wideband Short‐Range Sensing: Theory, Sensors, Applications_. 1st ed., Wiley, 2012. _DOI.org (Crossref)_, https://doi.org/10.1002/9783527651818, protected by password](https://pinktalk.online/research_career/papers_read/attachments/Dr._Ing.%20Jurgen%20Sachs(auth.)%20-%20Handbook%20of%20Ultra-Wideband%20Short-Range%20Sensing_%20Theory,%20Sensors,%20Applications%20(2012).pdf)
+## DOI
+
+https://doi.org/10.1002/9783527651818.
+
+## Abstract
+
+## Key points
+
+
+# Circularly Polarized Ultra-Wideband Radar System for Vital Signs Monitoring
+
+## DOI
+
+**DOI:** [10.1109/TMTT.2013.2253328](https://doi.org/10.1109/TMTT.2013.2253328)
+
+## Abstract
+
+A UWB radar system design based on a time-expanded correlation architecture.
+## Key points
+
+
+
+
+# Efficient Passive Low-Rate Pulse Generator for Ultra-Wideband Radar
+
+## DOI
+
+https://doi.org/10.1049/iet-map.2010.0030
+
+## Abstract
+
+A novel method to produce short pulses at low PRFs using cascaded sections of shunt-mode SRDs with decreasing carrier lifetimes.
+## Key points
+
+
+
+The pulse generator is based on SRDs.
\ No newline at end of file
diff --git a/content/research_career/plan/Next_work_plan/2023_12.md b/content/research_career/plan/Next_work_plan/2023_12.md
new file mode 100644
index 000000000..07795cb51
--- /dev/null
+++ b/content/research_career/plan/Next_work_plan/2023_12.md
@@ -0,0 +1,86 @@
+---
+title: Dec. 2023 Work Plan
+tags:
+ - plan
+---
+
+Task:
+
+- [ ] 电缆参数测量实验
+- [ ] 单独频段研究
+- [ ] 木板实验
+- [ ] 电缆参数学习
+- [ ] 了解无线电频段分布的原因
+- [ ] 学习天线馈电电路知识
+
+
+# 实验相关
+
+## 木板实验
+
+### Date
+
+11.30 - 12.4
+### Content
+
+按照原始实验设置,测量不同厚度木板的S11、S21数据,分析材质对反射和透射的影响
+
+## 电缆参数测量实验
+
+### Date
+
+电缆参数学习后
+### Content
+
+使用VNA测量手中的电缆的参数验证学习结果
+
+## 单独频段研究
+
+### Date
+
+12.1 - 12.5
+### Content
+
+使用测量得到的数据,通过合适的方法编程分析单独频段的反射性,穿透性等等
+
+# 学习内容相关
+
+## 电缆研究
+
+### Date
+
+11.29 - 12.4
+### Content
+
+* 了解如何评估电缆
+
+ * 参数
+
+* 使用 VNA 测试电缆并计算其性能参数
+
+* 评估电缆对我们实验会带来的影响
+
+## 了解无线电频段分布的原因
+
+### Date
+
+12.1 - 12.4
+
+### Content
+
+继续深入了解不同频段无线电的特性,并和我们的实验数据找出对应
+
+## 学习天线馈电电路知识
+
+### Date
+
+12.1 - 12.7
+
+### Content
+
+继续深入了解天线馈电电路设计的原理和带来的影响,衡量能否在我们实验的天线部分加入馈电电路
+
+
+# Scrum Board
+
+https://trello.com/b/QbNmVHvT/uwb-echo-signal-experiment-simulated-by-vna
\ No newline at end of file
diff --git a/content/resume.md b/content/resume.md
new file mode 100644
index 000000000..30035bea6
--- /dev/null
+++ b/content/resume.md
@@ -0,0 +1,118 @@
+---
+title: Resume
+tags:
+- resume
+- readme
+---
+
+
+
+
+
JudeWang
+
+
+
+# 📐 Education
+
+**Zhejiang University (ZJU)**, Zhejiang, China 2022.09 - Now
+*M.Sc.* Major in Biomedical Engineering (BME)
+
+**Exchange to National University of Singapore (NUS)** 2021.08-2022.05
+*Final Year Project* instructed by [Dan Wu](https://person.zju.edu.cn/en/danwu) and [Zhiwei Huang](https://cde.nus.edu.sg/bme/staff/dr-huang-zhiwei/)
+
+**Zhejiang University**, Zhejiang, China 2018.08-2022.06
+*B.S.* Major in Biomedical Engineering (BME), *The first Lv Weixue laboratory class in ZJU*
+*B.S.* Minor in *Intensive Training Program of Innovation and Entrepreneurship (ITP)*
+
+**Summer exchange to City University of HongKong (CityU)** Aug. 2019
+
+# 🔥 Projects & Research Experience
+
+**Master's thesis** 2022 - now
+*SAR image reconstruction to detect burn skin based on UWB echo signal*
+My master's research currently focuses on echo signal processing. Ultra-wideband (UWB) signals can penetrate the skin surface and carry information from the tissue beneath it, so we want to use this property to assess skin burn levels in a non-invasive way. This project involves the *back projection (BP)* algorithm to reconstruct 3D images from echo signals, image artifact removal based on the amplitude coherence factor (ACF) and correlation weighting (CW), burn level evaluation methods, and so on.
+https://github.com/PinkR1ver/UWB-Imagination-Using-SAR
+
+**FYP** 2021-2022
+*Radiogenomics Analysis of Glioblastoma with Deep learning Techniques*
+I finished this FYP under the supervision of [Zhiwei Huang](https://cde.nus.edu.sg/bme/staff/dr-huang-zhiwei/) at NUS. The project contains three parts: MRI image segmentation with *U-Net*, radiomics feature extraction with *pyradiomics*, and feature vector classification with machine learning models such as *random forest*, *MLP*, and *SVM*. [https://github.com/PinkR1ver/Radiogenemics--on-Ivy-Gap](https://github.com/PinkR1ver/Radiogenemics--on-Ivy-Gap)
+
+**SRTP** 2020-2021
+*3D tooth segmentation based on deep learning*
+This project targets instance segmentation of 3D tooth scans. We implemented semantic segmentation to separate teeth from gingiva using PointNet, and then used bounding boxes to perform instance segmentation on every single tooth using 3D-BoNet.
+
+**Smooth Boy** Aug. 2020
+*A skin evaluation and product recommendation WeChat app*
+Smooth Boy is a WeChat app that can evaluate a person's skin quality and recommend skin-care products; it was developed especially for young male teenagers shopping for beauty products. It was also a real self-learning project: I designed the UI and coded the core part of the app, including the face detection function. [https://github.com/PinkR1ver/Smooth-Boy](https://github.com/PinkR1ver/Smooth-Boy)
+
+
+**Sketchpad** Apr – Jun. 2019
+*Polynomial function visualization in 1995 style*
+It was my first class project in college: a simple program, based on an old graphics library created in 1995, that draws images of polynomial functions. I built the overall code structure and organized my team to implement the different parts. It was really exciting to stay up all night coding, and I almost cried when it was finished. [https://github.com/PinkR1ver/SketchPad](https://github.com/PinkR1ver/SketchPad)
+
+
+# 🤹🏽Skills & Knowledge
+
+## Proficient
+
+
+
+
+
+
+
+
+
+
+## Detail
+
+* Program Language: Python >= MATLAB >> C == HTML/CSS/JavaScript
+* Deep Learning Associated:
+ * Proficient in PyTorch deep learning frameworks
+ * Familiar with the common techniques and algorithms of deep neural networks.
+ * Familiar with the common CV tasks and NLP tasks.
+ * Familiar with some famous DL backbones - U-Net, ViT...
+ * Learning LLM knowledge recently
+* Core lessons:
+ * NUS - EE4305 - Fuzzy/Neural System for Intelligent Robotics - Grade: A
+ * NUS - EE4309 - Robot Perception - Grade: A-
+ * ZJU - Signal and System - Grade: A-
+ * ZJU - Modern Medical Imaging Technology - Grade: A
+ * ZJU - Data Structure - Grade: B+
+ * ...
+* Toolkit: Git, VS code
+
+## Others
+
+* Jekyll, RStudio and some other tools to build personal blog: [https://pinkr1ver.com](https://pinkr1ver.com) (🚧 obsolete...)
+* HTML+CSS+JS to create my photo slide show web - [https://pinkr1ver.com/PhotoGallery/](https://pinkr1ver.com/PhotoGallery/)
+* SHAP analysis for model interpretability https://github.com/PinkR1ver/SHAP_Tutorial
+* $\LaTeX$ for my FYP thesis, contributed to 1.9k star repository [zjuthesis](https://github.com/TheNetAdmin/zjuthesis)
+
+# 🏆 Honors
+
+* Excellent Graduate of Zhejiang University
+* Third Class Scholarship of Zhejiang University
+
+# 🎈 Clubs & Social Activities
+
+* Support Education in Jiande Town, Changsha, Hunan 2019.7-2019.8
+* DanYang & QingXi Community Student Union New Media Department deputy director 2018-2020
+
+# 🌺 Other Fun Facts
+
+* Outdoor fans - cycling, hiking... - [My Strava Profile](https://www.strava.com/athletes/109116948)
+* Photography fans - [My Photo Gallery](https://pinkr1ver.notion.site/3cfdd332b9a94b20bca041f2aa2bdcd2?v=24e696e6ab754386a710bc8e83976357)
+* Loving films, dramas, books... - [My Watching List](https://pinkr1ver.notion.site/5e136466f3664ff1aaaa75b85446e5b4?v=a41efbce52a84f7aa89d8f649f4620f6)
+* PC Game fans, especially CS - [My Steam profile](https://steamcommunity.com/id/PinkCred1t)
+* Chess fans - [Rank in chess.com](https://www.chess.com/member/yichongwang)
+
+# 📟 Contacts
+
+* 🏢: Dept. Biomedical Engineering Lab 511 | Zhejiang University
+* ☎: +86 177-6826-6860
+* 📬: pinkr1veroops@gmail.com
+
diff --git a/content/signal_processing/PSD_estimation/PSD_estimation.md b/content/signal_processing/PSD_estimation/PSD_estimation.md
new file mode 100644
index 000000000..69d6d357e
--- /dev/null
+++ b/content/signal_processing/PSD_estimation/PSD_estimation.md
@@ -0,0 +1,7 @@
+---
+title: Power spectral density estimation
+tags:
+ - signal-processing
+ - statistics
+---
+[Power spectral density estimation](signal_processing/basic_knowledge/concept/Spectral_density.md)(PSDE, or SDE),功率谱估计是随机信号处理的重要研究内容之一
\ No newline at end of file
diff --git a/content/signal_processing/UWB_about/UWB_signal_parameters.md b/content/signal_processing/UWB_about/UWB_signal_parameters.md
new file mode 100644
index 000000000..5c9b24873
--- /dev/null
+++ b/content/signal_processing/UWB_about/UWB_signal_parameters.md
@@ -0,0 +1,7 @@
+---
+title: UWB signals, Their Descriptions and Parameters
+tags:
+ - signal
+ - signal-processing
+ - UWB
+---
diff --git a/content/signal_processing/attachments/Pasted image 20230919152200.png b/content/signal_processing/attachments/Pasted image 20230919152200.png
new file mode 100644
index 000000000..770c934cf
Binary files /dev/null and b/content/signal_processing/attachments/Pasted image 20230919152200.png differ
diff --git a/content/signal_processing/attachments/Pasted image 20230919152234.png b/content/signal_processing/attachments/Pasted image 20230919152234.png
new file mode 100644
index 000000000..ad45b743f
Binary files /dev/null and b/content/signal_processing/attachments/Pasted image 20230919152234.png differ
diff --git a/content/signal_processing/attachments/Pasted image 20230919152357.png b/content/signal_processing/attachments/Pasted image 20230919152357.png
new file mode 100644
index 000000000..5538ec922
Binary files /dev/null and b/content/signal_processing/attachments/Pasted image 20230919152357.png differ
diff --git a/content/signal_processing/attachments/Pasted image 20230919152720.png b/content/signal_processing/attachments/Pasted image 20230919152720.png
new file mode 100644
index 000000000..1b8a260dc
Binary files /dev/null and b/content/signal_processing/attachments/Pasted image 20230919152720.png differ
diff --git a/content/signal_processing/attachments/Pasted image 20230919153109.png b/content/signal_processing/attachments/Pasted image 20230919153109.png
new file mode 100644
index 000000000..966e91d01
Binary files /dev/null and b/content/signal_processing/attachments/Pasted image 20230919153109.png differ
diff --git a/content/signal_processing/attachments/Pasted image 20230919153401.png b/content/signal_processing/attachments/Pasted image 20230919153401.png
new file mode 100644
index 000000000..f0e52e5f2
Binary files /dev/null and b/content/signal_processing/attachments/Pasted image 20230919153401.png differ
diff --git a/content/signal_processing/attachments/Screenshot_from_2022-10-18_10-53-17.png b/content/signal_processing/attachments/Screenshot_from_2022-10-18_10-53-17.png
new file mode 100644
index 000000000..4dbd085fd
Binary files /dev/null and b/content/signal_processing/attachments/Screenshot_from_2022-10-18_10-53-17.png differ
diff --git a/content/signal_processing/attachments/fourier_pairs.pdf b/content/signal_processing/attachments/fourier_pairs.pdf
new file mode 100644
index 000000000..591cbb578
Binary files /dev/null and b/content/signal_processing/attachments/fourier_pairs.pdf differ
diff --git a/content/signal_processing/basic_knowledge/FT/fourier_transform.md b/content/signal_processing/basic_knowledge/FT/fourier_transform.md
new file mode 100644
index 000000000..1d5ec25ff
--- /dev/null
+++ b/content/signal_processing/basic_knowledge/FT/fourier_transform.md
@@ -0,0 +1,237 @@
+---
+title: Fourier Transform
+tags:
+ - math
+ - signal-processing
+---
+# Almost Fourier Transform
+
+
+
+It is important to see there are 2 different frequencies here:
+1. The frequency of the original signal
+2. The frequency with which the **little rotating vector winds around the circle**
+
+
+
+Different patterns appear as we wind up this graph, but it is clear that the x-coordinate of the center of mass spikes when the winding frequency is 3, the same frequency as the original signal
+
+这个发现就是Fourier transform的基础
+
+而如何将一维信息拉到平面中,很容易想到设计complex plane,如何describe rotating at a rate of $f$, 用:
+
+$$
+e^{2\pi i ft}
+$$
+
+因为在Fourier transform中,惯例(convention)是顺时针旋转,所以使用$e^{-2\pi ift}$。那如何衡量center of mass呢?如下图:
+
+
+
+
+
+$$
+\frac{1}{N}\sum_{k=1}^N g(t_k)e^{-2\pi i ft_k}
+$$
+
+然后more points → continuous:
+
+$$
+\frac{1}{t_2-t_1}\int_{t_1}^{t_2}g(t)e^{-2\pi ift} dt
+$$
+
+这个就是Almost Fourier Transform, 但是实际情况上,Fourier transform倾向于得到scaled center mass,越长的time,旋转越多圈,其Fourier transform也会成倍放大
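+
+A small numerical check of the winding picture: the center of mass only moves far from the origin when the winding frequency matches the signal frequency (3 Hz here).
+
+```python
+# Wind a 3 Hz cosine around the complex plane at different winding
+# frequencies and look at the magnitude of the center of mass.
+import numpy as np
+
+t = np.linspace(0, 4, 4000)          # 4 seconds of signal
+g = np.cos(2 * np.pi * 3 * t)        # a 3 Hz cosine
+
+for f_wind in [1, 2, 3, 4]:
+    center = np.mean(g * np.exp(-2j * np.pi * f_wind * t))
+    print(f"winding at {f_wind} Hz -> |center of mass| = {abs(center):.3f}")
+```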
+
+
+
+
+# Fourier Transform (FT)
+
+$$
+\hat{g}(f)=\int_{t_1}^{t_2}g(t)e^{-2\pi ift}dt
+$$
+
+一般来说,Fourier transform的bounds在$-\infty \rightarrow \infty$
+
+$$
+\hat{g}(f)=\int_{-\infty}^{\infty}g(t)e^{-2\pi ift}dt
+$$
+
+**Inverse Fourier Transform**
+
+$$
+g(t)=\int_{-\infty}^{\infty}\hat{g}(f)e^{2\pi ift}df
+$$
+
+# **Discrete-time Fourier Transform(DTFT)**
+
+$$
+x[n]\xrightleftharpoons[\text{IDTFT}]{\text{DTFT}} X(\omega)\ or\ X(e^{j\omega})
+$$
+
+$$
+X(\omega)=\sum_{n=-\infty}^{\infty}x[n]e^{-j\omega n}
+$$
+
+
+> [!hint]
+> Z transform:
+$X(z)=\sum_{n=-\infty}^{\infty}x(n)z^{-n}$
+
+After DTFT, the signal $X(\omega)$ will have period $2\pi$
+
+$$
+X(\omega+2\pi)=\sum_{n=-\infty}^\infty x[n]e^{-j(\omega+2\pi)n}=\sum_{n=-\infty}^{\infty}x[n]e^{-j\omega n}=X(\omega)
+$$
+
+IDTFT:
+
+$$
+x(n)=\frac{1}{2\pi}\int_{-\pi}^\pi X(\omega)e^{j\omega n}d\omega
+$$
+
+Also, for $X(\omega)$, it have **polar form and rectangular form**
+
+- Polar form:
+
+$$
+X(\omega)=|X(\omega)|\angle X(\omega)
+$$
+
+- Rectangular form:
+
+$$
+X(\omega)=X_r(\omega)+jX_i(\omega)
+$$
+
+so, **magnitude and angle**
+
+$$
+|X(\omega)|=\sqrt{{X_r(\omega)}^2 + {X_i(\omega)}^2} \\
+\angle X(\omega)=\tan^{-1}[\frac{X_i(\omega)}{X_r(\omega)}]
+$$
+
+# Complex Fourier Series
+
+为了解决热方程和弦振动,因此有了傅里叶级数;
+
+## 复数形式推导
+
+
+
+
+## 三角函数推导
+
+见这个知乎:
+
+[傅里叶系列(一)傅里叶级数的推导](https://zhuanlan.zhihu.com/p/41455378)
+
+$$
+f(t)=\frac{1}{2}a_0 + \sum_{k=1}^\infty (a_k\cos 2\pi kt + b_k\sin 2\pi kt)
+$$
+
+
+# Discrete Fourier Transform(DFT)
+
+$$
+\{f_1,f_2,f_3,\cdots,f_n\}\xRightarrow{\text{DFT}}\{{\hat{f_1},\hat{f_2},\hat{f_3},\cdots,\hat{f_n}}\}
+$$
+
+$$
+X[k]=\sum_{n=0}^{N-1}x[n]\cdot e^{-\frac{j2\pi kn}{N}}
+$$
+
+$$
+\frac{k}{N}\hat{=} F, \quad n\hat{=}t
+$$
+
+Video: [Discrete Fourier Transform - Simple Step by Step](https://www.youtube.com/watch?v=mkGsMWi_j4Q)
+
+Also, when we do DFT, we need to notice **Nyquist Limit**
+
+Also,we can write DFT in matrix version:
+
+$$
+\text{make } \omega_N=e^{\frac{-2\pi i}{N}}
+$$
+
+it have:
+
+$$
+\begin{bmatrix}
+X[0] \\
+X[1] \\
+X[2] \\
+\vdots \\
+X[N-1] \\
+\end{bmatrix} =
+\begin{bmatrix}
+1 & 1 & 1 & \cdots & 1 \\
+1 & \omega_N & \omega_N^2 & \cdots & \omega_N^{N-1} \\
+1 & \omega_N^2 & \omega_N^4 & \cdots & \omega_N^{2(N-1)} \\
+\vdots & \vdots & \vdots & \ddots & \vdots \\
+1 & \omega_N^{N-1} & \omega_N^{2(N-1)} & \cdots & \omega_N^{(N-1)^2} \\
+\end{bmatrix}
+\begin{bmatrix}
+x[0] \\
+x[1] \\
+x[2] \\
+\vdots \\
+x[N-1] \\
+\end{bmatrix}
+$$
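+
+A short sketch that builds this DFT matrix explicitly and checks it against `numpy.fft.fft`:
+
+```python
+# Build W[k, m] = w_N^(k*m) with w_N = exp(-2*pi*i/N) and compare to the FFT.
+import numpy as np
+
+N = 8
+n = np.arange(N)
+W = np.exp(-2j * np.pi * np.outer(n, n) / N)
+
+x = np.random.default_rng(0).normal(size=N)
+print(np.allclose(W @ x, np.fft.fft(x)))   # True
+```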
+
+**For $X[k]$, it means a $\cos$ wave like this:**
+
+
+
+# Fast Fourier transform(FFT)
+
+**FFT is a computationally efficient way of computing the DFT**
+
+The time complexity of FFT is $O(n\log{n})$, and the time complexity of DFT is $O(n^2)$
+
+for DFT:
+
+$$
+\hat{f}=\underbrace{F_{1024}}_{\text{DFT Matrix}} \cdot f
+$$
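+
+A minimal radix-2 Cooley-Tukey sketch (N must be a power of two), just to make the divide-and-conquer structure behind the $O(n\log{n})$ cost concrete; `numpy.fft` is what you would use in practice.
+
+```python
+# Recursive radix-2 FFT: split into even/odd halves and combine with twiddles.
+import numpy as np
+
+def fft_radix2(x):
+    x = np.asarray(x, dtype=complex)
+    N = len(x)
+    if N == 1:
+        return x
+    even = fft_radix2(x[0::2])
+    odd = fft_radix2(x[1::2])
+    twiddle = np.exp(-2j * np.pi * np.arange(N // 2) / N)
+    return np.concatenate([even + twiddle * odd, even - twiddle * odd])
+
+x = np.random.default_rng(1).normal(size=16)
+print(np.allclose(fft_radix2(x), np.fft.fft(x)))   # True
+```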
+
+# Z transform
+
+The Z-transform (ZT) is a mathematical tool which is used to convert the **difference equations** in **time domain** into the **algebraic equations** in **z-domain**.
+
+通常,Z变换有两种类型,**unilateral (or one-sided)** and **bilateral (or two-sided)**
+
+**bilateral:**
+
+$$
+Z[x(n)]=X(z)=\sum_{n=-\infty}^\infty x(n)z^{-n}
+$$
+
+**unilateral:**
+
+$$
+Z[x(n)]=X(z)=\sum_{n=0}^{\infty} x(n)z^{-n}
+$$
+
+where, z is a complex variable and it is given by:
+
+$$
+z=re^{j \omega}
+$$
+
+The unilateral or one-sided z-transform is very useful because we mostly deal with **causal sequences**. Also, it is mainly suited for **solving difference equations with initial conditions**.
+
+
+# Fourier Pairs
+
+[fourier_pairs.pdf](https://pinktalk.online/signal_processing/attachments/fourier_pairs.pdf)
+# Reference
+
+* [But what is the Fourier Transform? A visual introduction.](https://www.youtube.com/watch?v=spUNpyF58BY&t=614s)
+* [But what is a Fourier series? From heat flow to drawing with circles | DE4](https://www.youtube.com/watch?v=r6sGWTCMz2k&t=531s)
+* [傅里叶系列(一)傅里叶级数的推导](https://zhuanlan.zhihu.com/p/41455378)
+* [The Discrete Fourier Transform (DFT)](https://www.youtube.com/watch?v=nl9TZanwbBk)
+* [The Fast Fourier Transform (FFT): Most Ingenious Algorithm Ever?](https://www.youtube.com/watch?v=h7apO7q16V0)
+* [Euler’s formula](https://www.notion.so/Euler-s-formula-d8e4462d5cda4e09a4ca4fcda7cd1392?pvs=21)
\ No newline at end of file
diff --git a/content/signal_processing/basic_knowledge/FT/fourier_transform_pairs_derivation.md b/content/signal_processing/basic_knowledge/FT/fourier_transform_pairs_derivation.md
new file mode 100644
index 000000000..6ea4f0011
--- /dev/null
+++ b/content/signal_processing/basic_knowledge/FT/fourier_transform_pairs_derivation.md
@@ -0,0 +1,22 @@
+---
+title: Fourier transform pairs and properties derivation
+tags:
+ - signal-processing
+ - signal
+ - fourier-transform
+ - math
+---
+# Fourier Transform Pairs
+
+## $1 \leftrightarrow 2\pi\delta(\omega)$
+
+
+$$
+\begin{equation}
+\begin{split}
+ X(\omega) & = \int_{-\infty}^{\infty} 1 * e^{-j\omega t} dt \\
+ & = \lim_{T\rightarrow\infty}\int_{-\frac{T}{2}}^{\frac{T}{2}} e^{-j\omega t} dt \\
+ & = \lim_{T\rightarrow\infty} -\frac{1}{j\omega} [e^{-j\omega t}] |_{t=-T/2}^{t=T/2} \\
+ & = \lim_{T\rightarrow\infty} \frac{e^{j\omega T/2}-e^{-j\omega T/2}}{j\omega} \\
+ & = \lim_{T\rightarrow\infty} \frac{2\sin(\omega T/2)}{\omega} = 2\pi\delta(\omega)
+\end{split}
+\end{equation}
+$$
diff --git a/content/signal_processing/basic_knowledge/attachments/Pasted image 20240115112204.png b/content/signal_processing/basic_knowledge/attachments/Pasted image 20240115112204.png
new file mode 100644
index 000000000..67de97670
Binary files /dev/null and b/content/signal_processing/basic_knowledge/attachments/Pasted image 20240115112204.png differ
diff --git a/content/signal_processing/basic_knowledge/concept/FBW.md b/content/signal_processing/basic_knowledge/concept/FBW.md
new file mode 100644
index 000000000..9e2a63a44
--- /dev/null
+++ b/content/signal_processing/basic_knowledge/concept/FBW.md
@@ -0,0 +1,9 @@
+---
+title: FBW - Fractional Bandwidth
+tags:
+ - gauss-pulse
+ - basic
+---
+# Reference
+
+* [https://www.telecomtrainer.com/fbw-fractional-bandwidth/#:~:text=In%20simple%20terms%2C%20FBW%20is,has%20a%20low%20fractional%20bandwidth.](https://www.telecomtrainer.com/fbw-fractional-bandwidth/#:~:text=In%20simple%20terms%2C%20FBW%20is,has%20a%20low%20fractional%20bandwidth.)
diff --git a/content/signal_processing/basic_knowledge/concept/FM_vs_AM.md b/content/signal_processing/basic_knowledge/concept/FM_vs_AM.md
new file mode 100644
index 000000000..0f656a541
--- /dev/null
+++ b/content/signal_processing/basic_knowledge/concept/FM_vs_AM.md
@@ -0,0 +1,9 @@
+---
+title: FM signal vs. AM signal - Learning Signal Transmitting
+tags:
+ - signal
+ - signal-processing
+---
+# Reference
+
+* [FM和AM是啥意思?收音机是咋收到音乐的?李永乐老师讲广播信号传输. www.youtube.com_ https://www.youtube.com/watch?v=ckAflJSt5-4. Accessed 8 Oct. 2023.](https://www.youtube.com/watch?v=ckAflJSt5-4)
\ No newline at end of file
diff --git a/content/signal_processing/basic_knowledge/concept/SWR.md b/content/signal_processing/basic_knowledge/concept/SWR.md
new file mode 100644
index 000000000..f8207ad0a
--- /dev/null
+++ b/content/signal_processing/basic_knowledge/concept/SWR.md
@@ -0,0 +1,23 @@
+---
+title: Standing Wave Ratio
+tags:
+ - signal-processing
+ - VNA
+ - antenna
+---
+天线的驻波比(Standing Wave Ratio,SWR)是用于**衡量天线与传输线或电路之间的匹配性能的一个关键参数**。SWR是一个无量纲的比值,取值不小于 1,用于表示天线的输入阻抗与传输线或电路的特性阻抗之间的失配程度。SWR的值越低(越接近 1),表示天线与传输线的匹配越好,能更有效地传输能量。
+
+以下是关于SWR的一些重要概念:
+
+1. **SWR的定义**:SWR 定义为传输线上驻波电压的最大值与最小值之比;它也可以由反射系数 $\Gamma$ 表示为 $\text{SWR} = \frac{1+|\Gamma|}{1-|\Gamma|}$。
+
+2. **理想匹配**:在理想情况下,当SWR等于1时,表示天线与传输线或电路完美匹配,没有反射。这是最佳的匹配情况,所有的能量都被传输到天线,而没有反射回来。
+
+3. **SWR的应用**:SWR是用于评估天线系统性能的重要参数。较高的SWR值表示较差的匹配,可能会导致能量反射和损失。天线的设计和调整通常涉及到降低SWR以实现更好的匹配。
+
+4. **测量和仪器**:SWR可以通过使用SWR仪器或矢量网络分析仪(VNA)等仪器来测量。这些仪器可以提供关于天线系统性能的详细信息,包括SWR曲线。
+
+5. **天线选择**:在选择天线时,工程师通常会考虑SWR作为一个重要因素。根据应用需求和特定频率范围,选择具有适当SWR的天线,以确保最佳性能。
+
+
+总之,天线的驻波比(SWR)是用于衡量天线与传输线或电路之间匹配性能的一个关键参数。它帮助工程师评估能量传输和反射,并选择适当的天线以满足特定的通信或射频应用需求。
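+
+A small sketch computing VSWR from the reflection coefficient $\Gamma$, and from a load impedance against a 50 Ω line; the impedance values are illustrative.
+
+```python
+# VSWR = (1 + |Gamma|) / (1 - |Gamma|), with Gamma = (ZL - Z0) / (ZL + Z0).
+import numpy as np
+
+def vswr_from_gamma(gamma):
+    g = abs(gamma)
+    return (1 + g) / (1 - g) if g < 1 else np.inf
+
+def vswr_from_impedance(z_load, z0=50.0):
+    gamma = (z_load - z0) / (z_load + z0)
+    return vswr_from_gamma(gamma)
+
+print(vswr_from_impedance(50 + 0j))    # 1.0, perfect match
+print(vswr_from_impedance(75 + 0j))    # 1.5
+print(vswr_from_impedance(100 + 25j))  # about 2.16
+```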
\ No newline at end of file
diff --git a/content/signal_processing/basic_knowledge/concept/Spectral_density.md b/content/signal_processing/basic_knowledge/concept/Spectral_density.md
new file mode 100644
index 000000000..002eaa216
--- /dev/null
+++ b/content/signal_processing/basic_knowledge/concept/Spectral_density.md
@@ -0,0 +1,6 @@
+---
+title: Spectral Density
+tags:
+ - signal
+ - basic
+---
diff --git a/content/signal_processing/basic_knowledge/concept/scattering_parameters.md b/content/signal_processing/basic_knowledge/concept/scattering_parameters.md
new file mode 100644
index 000000000..f819df1dd
--- /dev/null
+++ b/content/signal_processing/basic_knowledge/concept/scattering_parameters.md
@@ -0,0 +1,19 @@
+---
+title: Scattering Parameters
+tags:
+ - signal-processing
+ - VNA
+ - devices
+---
+S参数(Scattering Parameters,散射参数)是用于描述射频和微波电路中信号传输和散射特性的一组参数。S参数响应是指S参数随频率变化的响应或值。这些参数用于描述电路中信号的传输、反射和散射情况,通常以矩阵的形式表示。S参数通常有四个值:S11、S12、S21和S22,分别表示反射系数和传输系数。
+
+以下是S参数的基本定义和解释:
+
+1. **S11**:S11是入射信号反射回同一端口的参数。它描述了信号从电路中的一个端口反射回来的程度。S11的值通常介于0和1之间,表示反射功率与入射功率之比。当S11等于0时,表示没有反射,而当S11等于1时,表示完全反射。在VNA中,S11是反射系数,一般简称为**REFL**
+
+2. **S12**:S12是从一个端口到另一个端口的传输系数。它描述了信号从一个端口传输到另一个端口的程度。S12的值通常表示为复数,包括幅度和相位信息。
+
+3. **S21**:S21是从一个端口到另一个端口的传输系数,通常是最常用的参数。它描述了信号从一个端口传输到另一个端口的程度。与S12类似,S21的值也包括幅度和相位信息。
+
+4. **S22**:S22是从同一端口反射回同一端口的参数,类似于S11,但描述了不同的电路情况。S22的值也表示反射功率与入射功率之比。
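+
+A quick sketch converting linear S11/S21 values into return loss and insertion loss in dB; the complex values below are made up for illustration.
+
+```python
+# Return loss from S11 and insertion loss from S21 (both in dB).
+import numpy as np
+
+s11 = 0.1 * np.exp(1j * np.deg2rad(30))    # illustrative reflection coefficient
+s21 = 0.7 * np.exp(-1j * np.deg2rad(85))   # illustrative transmission coefficient
+
+return_loss_db = -20 * np.log10(abs(s11))      # larger value = better match
+insertion_loss_db = -20 * np.log10(abs(s21))   # loss through the device
+
+print(f"RL = {return_loss_db:.1f} dB, IL = {insertion_loss_db:.1f} dB")
+```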
+
diff --git a/content/signal_processing/basic_knowledge/concept/smith_graph.md b/content/signal_processing/basic_knowledge/concept/smith_graph.md
new file mode 100644
index 000000000..c15c79ba4
--- /dev/null
+++ b/content/signal_processing/basic_knowledge/concept/smith_graph.md
@@ -0,0 +1,24 @@
+---
+title: Smith Graph
+tags:
+ - signal-processing
+ - VNA
+---
+Smith图,又称为Smith圆图或史密斯图,是一种常用于射频(RF)和微波工程中的图形工具,用于可视化和分析电路、天线和传输线的阻抗匹配和传输线特性。Smith图以美国工程师Philip H. Smith的名字命名,他于20世纪30年代提出了这一图表,并对阻抗匹配问题作出了重要贡献。
+
+Smith图的主要特点和用途包括:
+
+1. **圆形图表**:Smith图通常表示为一个圆形图表,其中圆的边界代表纯电阻(实部)和纯电抗(虚部)的所有可能阻抗值。圆心表示阻抗为零的点。
+
+2. **阻抗线和反射系数圆圈**:Smith图上有一系列称为阻抗线的曲线,它们表示具有特定阻抗值的点。此外,Smith图还包括一系列反射系数圆圈,它们表示具有相同反射系数幅度的点。这些线和圆圈有助于可视化不同阻抗和反射系数的值。
+
+3. **标准化阻抗**:Smith图通常使用标准化阻抗来表示电路或传输线的阻抗。标准化阻抗是实际阻抗除以特定的参考阻抗值,通常是特定的传输线特性阻抗,如50欧姆或75欧姆。
+
+4. **阻抗匹配**:Smith图非常有用于分析电路的阻抗匹配问题。通过将电路的阻抗表示在Smith图上,工程师可以找到阻抗匹配网络的解决方案,以确保信号的最大功率传输。
+
+5. **传输线分析**:Smith图也用于分析传输线(如同轴电缆)的特性。它可以帮助工程师确定传输线的特性阻抗,波长和阻抗转换等。
+
+6. **频率扫描**:Smith图经常与矢量网络分析仪(VNA)一起使用,以测量和分析电路在不同频率下的反射系数和阻抗。
+
+
+总之,Smith图是一个强大的工具,用于可视化和分析射频和微波电路中的阻抗匹配和传输线特性。通过在Smith图上绘制电路的阻抗,工程师可以更容易地理解和解决复杂的电路匹配问题,并优化电路性能。
\ No newline at end of file
diff --git a/content/signal_processing/basic_knowledge/concept/what_is_dB.md b/content/signal_processing/basic_knowledge/concept/what_is_dB.md
new file mode 100644
index 000000000..a3327d522
--- /dev/null
+++ b/content/signal_processing/basic_knowledge/concept/what_is_dB.md
@@ -0,0 +1,28 @@
+---
+title: What is dB
+tags:
+- signal-processing
+- basic
+---
+dB is short for decibel, which is a unit that indicates ratio or gain. It is often used to measure *sound intensity*, *signal strength*, *attenuation* and other quantities.
+
+For example, if one sound has a power of 10 W and another has a power of 1 W, then the difference is $10\log_{10}(10/1) = 10$ dB.
+
+**Signal Noise Ratio** is also measured by dB
+
+## Signal Noise Ratio
+$$
+{SNR}_{power}=\frac{\text{Average Signal Power}}{\text{Average Noise Power}}
+$$
+
+$$
+{SNR}_{voltage}=\frac{\text{RMS Signal Voltage}}{\text{RMS Noise Voltage}}
+$$
+
+$$
+{SNR}_{power}={{SNR}_{voltage}}^2
+$$
+
+$$
+{SNR}_{dB}=10\log_{10}{{SNR}_{power}}=20\log_{10}{{SNR}_{voltage}}
+$$
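+
+A quick sketch estimating SNR in dB from a noisy sine using the power definition above (signal and noise here are synthetic):
+
+```python
+# SNR_dB = 10 * log10(mean signal power / mean noise power)
+import numpy as np
+
+t = np.linspace(0, 1, 10_000)
+signal = np.sin(2 * np.pi * 50 * t)
+noise = 0.1 * np.random.default_rng(0).normal(size=t.size)
+
+snr_power = np.mean(signal**2) / np.mean(noise**2)
+snr_db = 10 * np.log10(snr_power)
+print(f"SNR = {snr_db:.1f} dB")   # roughly 17 dB for this example
+```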
diff --git a/content/signal_processing/basic_knowledge/random_signal_basic.md b/content/signal_processing/basic_knowledge/random_signal_basic.md
new file mode 100644
index 000000000..68fd87eae
--- /dev/null
+++ b/content/signal_processing/basic_knowledge/random_signal_basic.md
@@ -0,0 +1,81 @@
+---
+title: Random Signal Basic
+tags:
+ - signal-processing
+ - math
+---
+
+# What is Random Signals
+
+- 随机信号(Random Signals)在任何时间的取值都是不能先验确定的随机变量
+
+- 虽然随机信号的取值不能先验确定,但这些取值却服从某种统计规律,换言之,随机信号或过程可以用概率分布特性统计地描述
+
+- 随机变量 $X=x(t)$,离散状态为随机序列 $x(n)$,$x_k(n)$是随机序列$x(n)$的一个样本序列
+
+# 统计量
+
+$$
+\mu_x(n)=E\{x(n)\}=\lim_{N\rightarrow\infty}\frac{1}{N}\sum_{k=1}^N x_k(n)
+$$
+
+$$
+E\{x^2(n)\}=\lim_{N\rightarrow\infty}\frac{1}{N}\sum_{k=1}^N x^2_k(n)
+$$
+
+
+$$
+\sigma^2_x(n)=E\{[x(n)-\mu_x(n)]^2\}=\lim_{N\rightarrow\infty}\frac{1}{N}\sum_{k=1}^N[x_k(n)-\mu_x(n)]^2
+$$
+
+
+$$
+\sigma^2_x(n)=E\{x^2(n)\}-\mu^2_x(n)
+$$
+
+$$
+R_x(n_1,n_2)=E\{x(n_1)x(n_2)\}=\lim_{N\rightarrow\infty}\frac{1}{N}\sum_{k=1}^N x_k(n_1)x_k(n_2)
+$$
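+
+A small sketch estimating these ensemble statistics from a finite set of $K$ sample sequences (the definitions above take $K \rightarrow \infty$; the process here is synthetic):
+
+```python
+# Estimate mu_x(n), sigma_x^2(n) and R_x(n1, n2) by averaging over K samples.
+import numpy as np
+
+rng = np.random.default_rng(0)
+K, N = 2000, 64                                            # samples, length
+x = np.sin(0.2 * np.arange(N)) + rng.normal(size=(K, N))   # x_k(n)
+
+mu = x.mean(axis=0)                       # estimate of mu_x(n)
+var = np.mean((x - mu) ** 2, axis=0)      # estimate of sigma_x^2(n)
+n1, n2 = 10, 20
+R_n1n2 = np.mean(x[:, n1] * x[:, n2])     # estimate of R_x(n1, n2)
+print(mu[:3], var[:3], R_n1n2)
+```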
+
+
+## [Wide Sense Stationary Process 宽平稳过程](data_sci/stochastic_process/stationary_process.md)
+
+**平稳随机信号**——其统计特性与时间无关。
+
+1. $\mu_x(n)=\mu_x$ - 均值与n无关,即于哪次观察信号无关
+
+2. $R_x(n,n+m)=R_x(m)$ - 自相关函数与时间n无关,只与时移m有关
+
+> [!tip]
+> 💡 在实际工作中,我们往往把所研究的随机信号视为平稳的,这可使问题大大简化。实际上,自然界中的绝大部分随机信号在一定条件、一定范围内可以认为是平稳的。
+
+**非平稳随机信号**——其统计特性与时间有关,使用Wigner-Ville分布分析
+
+## Ergodicity 各态历经
+
+
+对于平稳随机信号,虽然它的统计特性与**时间无关**,但在计算各特征时采用的是**集合平均**,就需要$x_k(n)$的无穷多个样本,即$k=1,2,\cdots,\infty$
+
+这在实际工作中显然是不现实的,实际上我们只能得到若干个样本函数,有些情况下甚至只能得到一个,比如地震波;
+
+那能否用**一次试验记录(或一个样本函数)来计算均值、自相关函数等这些统计特征?**
+
+如果一平稳随机信号$x(n)$在**集合平均**意义上的均值和自相关函数与单一样本函数在**时间平均**意义上的均值和自相关函数相同,则称$x(n)$为**各态历经**信号(**Ergodicity**)。
+
+
+> [!tip]
+> Watch this video: [_What Is Ergodicity? - Alex Adamou_. _www.youtube.com_, https://www.youtube.com/watch?v=VCb2AMN87cg. Accessed 19 Sept. 2023.](https://www.youtube.com/watch?v=VCb2AMN87cg)
+
+
+对于拥有Ergodicity的信号,可以用时间平均代替集合平均,即
+
+
+
+
+$$
+\mu_x=E\{x(n)\}=\lim_{M\rightarrow\infty}\frac{1}{2M+1}\sum_{n=-M}^Mx(n)
+$$
+
+$$
+R_x(m)=E\{x(n)x(n+m)\}=\lim_{M\rightarrow\infty}\frac{1}{2M+1}\sum_{n=-M}^M x(n)x(n+m)
+$$
\ No newline at end of file
diff --git a/content/signal_processing/basic_knowledge/stability_of_discrete_system.md b/content/signal_processing/basic_knowledge/stability_of_discrete_system.md
new file mode 100644
index 000000000..ebe47ca2c
--- /dev/null
+++ b/content/signal_processing/basic_knowledge/stability_of_discrete_system.md
@@ -0,0 +1,55 @@
+---
+title: Stability of Discrete System
+tags:
+ - signal-processing
+ - basic
+ - system
+---
+# 离散系统稳定性判别(因果系统)
+
+## 时域充要条件
+
+$$
+\sum_{k=-\infty}^{\infty} |h(k)| < \infty
+$$
+
+**绝对可和**
+
+
+## Z域充要条件
+
+**$H(z)$收敛域包含单位圆** $\leftrightarrow$ $|P_j| < 1$ **极点都在单位圆内**
+
+
+> [!tip]
+> Z域的充要条件表明了在时域上,随着时间,时域信号是衰减的,因此绝对可和
+
+
+# 收敛域 ROC, Region of convergence
+
+
+$$
+ROC = \{z:|\sum_{n=-\infty}^{\infty}x[n]z^{-n}| < \infty\}
+$$
+
+ROC是指Z变换的求和收敛的复平面上的点集。
+
+## 因果的收敛域
+
+### Example
+
+$x[n]={0.5}^n\mu[n]$, 则
+
+$$
+\mathcal{Z}\{x[n]\} = \sum_{n=-\infty}^{\infty}x[n]z^{-n}=\sum_{n=0}^{\infty} (\frac{0.5}{z})^n = \frac{1}{1-0.5z^{-1}}
+$$
+
+最后一个等式来自无穷几何级数,而等式仅在 $|0.5z^{−1}| < 1$ 时成立,可以以 z 为变量写成 $|z| > 0.5$。因此,收敛域为 $|z| > 0.5$。在这种情况下,收敛域为复平面“挖掉”原点为中心的半径为 0.5 的圆盘。
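+
+A numeric sketch of this example: for $z$ inside the ROC the partial sums approach the closed form $\frac{1}{1-0.5z^{-1}}$, while outside the ROC they diverge.
+
+```python
+# Partial sums of sum_n (0.5 * z^-1)^n versus the closed form 1 / (1 - 0.5/z).
+import numpy as np
+
+def z_partial_sum(z, terms=200):
+    n = np.arange(terms)
+    return np.sum((0.5 / z) ** n)
+
+for z in [2.0, 0.8, 0.6]:                  # all satisfy |z| > 0.5 (inside ROC)
+    print(z, z_partial_sum(z), 1 / (1 - 0.5 / z))
+# at z = 0.4 (outside the ROC) the terms grow and the sum does not converge
+```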
+
+
+
+
+
+# Reference
+
+[_VK2.14-离散系统稳定性判据.Mp4 - Vk2.14-Discrete System Stability Is Judged .Mp4_. _www.youtube.com_, https://www.youtube.com/watch?v=1yM_Szmprtc. Accessed 15 Jan. 2024.](https://www.youtube.com/watch?v=1yM_Szmprtc)
\ No newline at end of file
diff --git a/content/signal_processing/device_and_components/SRD.md b/content/signal_processing/device_and_components/SRD.md
new file mode 100644
index 000000000..b5ad4b151
--- /dev/null
+++ b/content/signal_processing/device_and_components/SRD.md
@@ -0,0 +1,30 @@
+---
+title: Step Recovery Diode
+tags:
+ - signal
+ - ciruit-componets
+ - UWB
+---
+
+阶跃恢复二极管(Step Recovery Diode,SRD)是一种特殊类型的二极管,通常用于高频、脉冲和微波应用中。它的特殊结构和工作原理使其能够产生非常快速的电压和电流变化,因此在信号产生、调制和混频等电子电路中有广泛的应用。以下是关于SRD的详细介绍:
+
+1. 结构和工作原理:
+ - SRD的基本结构与普通二极管相似,通常是硅或碳化硅材料制成。它有两个端口,分别是阳极(Anode)和阴极(Cathode),并在中间有一个PN结。
+ - SRD的工作原理涉及到快速反向恢复特性。当正向偏置施加在PN结上时,它处于导通状态,就像普通二极管一样。但是,当反向偏置施加在SRD上时,PN结中的载流子需要一定时间才能从导通状态切换到截止状态。在这个切换过程中,SRD可以产生非常快速的电流和电压变化,因此它在高频脉冲应用中非常有用。
+
+2. 工作模式:
+ - SRD通常在开关模式下工作,特别是在高频和脉冲应用中。在这种模式下,SRD在正向偏置下导通,然后突然被反向偏置以切换到截止状态,这导致了极快的电流和电压变化。
+
+3. 应用领域:
+ - 脉冲产生器:SRD可以用于生成极短脉冲信号,通常用于雷达、通信、粒子加速器和其他需要高峰值功率的应用。
+ - 调制器:在调制电路中,SRD可以用于产生调制信号,例如频率合成和干扰源中的频率扩展。
+ - 混频器:SRD可用于实现频率混频,将两个信号相乘以生成新的频率分量。
+ - 脉冲测量:SRD在测量快速脉冲信号的过程中很有用,例如超短脉冲的时间测量。
+
+4. 优点和局限性:
+ - 优点:SRD具有非常快速的切换速度和高频率响应,使其在高频和脉冲应用中非常有用。
+ - 局限性:SRD的工作受限于其特殊的反向恢复特性,因此不适用于所有应用。此外,它们通常需要精确的驱动电路,以确保它们以正确的方式工作。
+
+总之,阶跃恢复二极管(SRD)是一种特殊的二极管,它在高频、脉冲和微波应用中具有广泛的应用,因为它可以产生非常快速的电流和电压变化,适用于各种电子电路中的特殊应用。
+
+
diff --git a/content/signal_processing/device_and_components/VNA_learn.md b/content/signal_processing/device_and_components/VNA_learn.md
new file mode 100644
index 000000000..984a29e60
--- /dev/null
+++ b/content/signal_processing/device_and_components/VNA_learn.md
@@ -0,0 +1,76 @@
+---
+title: Learn VNA in practical way
+tags:
+ - devices
+ - signal-processing
+---
+
+# Background
+
+About what is VNA: [VNA Research](research_career/UWB_about/report/VNA_research.md)
+
+# Step by Step Learn VNA using LiteVNA
+
+## Calibration
+
+### Type of Calibration
+
+1. Reference Calibration
+
+ 基准校准是通过标准的开路、短路和负载器(Load)标准件来进行校准;因为这些标准件已经知道它们的[S参数](signal_processing/basic_knowledge/concept/scattering_parameters.md)响应,因此可以用来校准
+
+ 在LiteVNA产品中,
+ * 中间没有内针的为开路校准件
+ * 中间有内针但是内针周边为黄色金属填充的为短路校准件
+ * 中间有内针但是内针周边为白色填充的为标准50欧姆校准件
+
+2. Insertion Loss Calibration
+
+ 这个步骤涉及使用已知的插入损耗标准件,通常是一段特定长度的电缆。NanoVNA测量这个标准件的响应,并使用它来校准插入损耗。这有助于确保NanoVNA在测量传输系数(S21)时考虑到了电缆等元件的损耗。
+
+3. **Short-Open-Load-Thru Calibration**
+ SOLT校准是一种综合的校准方法,结合了基准校准和插入损耗校准。在这种校准中,短路(Short)、开路(Open)、负载(Load)以及参考标准件(Thru)都用于校准系统。NanoVNA测量这些标准件,并使用它们来校准系统以消除误差。
+
+ 我们所使用的LiteVNA就是通过这种方法进行校准的
+
+4. Return Loss Calibration
+ 这个校准步骤用于测量端口的回传损耗(Return Loss)。通常,你需要连接一个回传损耗标准负载到NanoVNA的端口,然后测量其回传损耗。NanoVNA可以使用这个信息来校准测量反射系数(S11)。
+
+### Procedures
+
+使用SOLT Calibration校准法
+
+1. 链接开路校准件至PORT1,点击`开路`项
+2. 链接短路校准件至PORT1,点击`短路`项
+3. 链接50欧姆校准件至PORT1,点击`负载`项
+4. 链接50欧姆校准件至PORT1,点击`隔离`项
+5. 对接线链接PORT1与PORT2,点击`直通`项
+
+### Verify Calibration
+
+可以使用[Smith Graph](signal_processing/basic_knowledge/concept/smith_graph.md)来验证我们的Calibration
+
+开路状态下,Smith Graph的标记点应该在电阻线的最右端,表明阻抗无限大,且表现出纯电阻性
+
+
+
+PORT1链接短路校准件,查看史密斯图标记点应该在史密斯图上电阻线的最左端(阻抗为0,并且表现纯电阻性)。
+
+
+
+PORT1链接50欧姆校准件,查看史密斯图标记点应该在史密斯图上电阻线的中心(阻抗为50欧姆,并且表现纯电阻性)。
+
+
+
+
+链接一根可以确认阻抗与谐振都正常的天线(可以把一根天线定位对照组并妥善保管),可以通过拨轮移动标记点至[驻波比](signal_processing/basic_knowledge/concept/SWR.md)最低点,并同步观察该频率在史密斯图上的点是否在正中心(或者无限接近中心)。同时可以看屏幕最上面的参数,如图显示,我的这条对照天线最好的驻波比为1.021,此时对应的频率2.455GHz,史密斯图中阻抗为50.72Ω+j748mΩ
+
+
+
+
+# Reference
+
+* [_【RF专题研习】LiteVNA(网络分析仪)的初次使用与校准 - 野驴实验室_. https://blog.yelvlab.cn/archives/667/. Accessed 7 Oct. 2023.](https://blog.yelvlab.cn/archives/667/)
+
+
diff --git a/content/signal_processing/device_and_components/attachments/Pasted image 20231007162754.png b/content/signal_processing/device_and_components/attachments/Pasted image 20231007162754.png
new file mode 100644
index 000000000..243ac7d00
Binary files /dev/null and b/content/signal_processing/device_and_components/attachments/Pasted image 20231007162754.png differ
diff --git a/content/signal_processing/device_and_components/attachments/Pasted image 20231007162817.png b/content/signal_processing/device_and_components/attachments/Pasted image 20231007162817.png
new file mode 100644
index 000000000..4dc08894e
Binary files /dev/null and b/content/signal_processing/device_and_components/attachments/Pasted image 20231007162817.png differ
diff --git a/content/signal_processing/device_and_components/attachments/Pasted image 20231007162824.png b/content/signal_processing/device_and_components/attachments/Pasted image 20231007162824.png
new file mode 100644
index 000000000..89e41c722
Binary files /dev/null and b/content/signal_processing/device_and_components/attachments/Pasted image 20231007162824.png differ
diff --git a/content/signal_processing/device_and_components/attachments/Pasted image 20231007162826.png b/content/signal_processing/device_and_components/attachments/Pasted image 20231007162826.png
new file mode 100644
index 000000000..89e41c722
Binary files /dev/null and b/content/signal_processing/device_and_components/attachments/Pasted image 20231007162826.png differ
diff --git a/content/signal_processing/device_and_components/attachments/Pasted image 20231007162914.png b/content/signal_processing/device_and_components/attachments/Pasted image 20231007162914.png
new file mode 100644
index 000000000..26123fa55
Binary files /dev/null and b/content/signal_processing/device_and_components/attachments/Pasted image 20231007162914.png differ
diff --git a/content/signal_processing/device_and_components/attachments/Pasted image 20231102154725.png b/content/signal_processing/device_and_components/attachments/Pasted image 20231102154725.png
new file mode 100644
index 000000000..de9ab29a2
Binary files /dev/null and b/content/signal_processing/device_and_components/attachments/Pasted image 20231102154725.png differ
diff --git a/content/signal_processing/device_and_components/attachments/Pasted image 20231204110242.png b/content/signal_processing/device_and_components/attachments/Pasted image 20231204110242.png
new file mode 100644
index 000000000..5586ac659
Binary files /dev/null and b/content/signal_processing/device_and_components/attachments/Pasted image 20231204110242.png differ
diff --git a/content/signal_processing/device_and_components/attachments/Pasted image 20231204112611.png b/content/signal_processing/device_and_components/attachments/Pasted image 20231204112611.png
new file mode 100644
index 000000000..646192e6c
Binary files /dev/null and b/content/signal_processing/device_and_components/attachments/Pasted image 20231204112611.png differ
diff --git a/content/signal_processing/device_and_components/attachments/Pasted image 20231204113603.png b/content/signal_processing/device_and_components/attachments/Pasted image 20231204113603.png
new file mode 100644
index 000000000..a88319113
Binary files /dev/null and b/content/signal_processing/device_and_components/attachments/Pasted image 20231204113603.png differ
diff --git a/content/signal_processing/device_and_components/attachments/Pasted image 20231204114304.png b/content/signal_processing/device_and_components/attachments/Pasted image 20231204114304.png
new file mode 100644
index 000000000..3090684ab
Binary files /dev/null and b/content/signal_processing/device_and_components/attachments/Pasted image 20231204114304.png differ
diff --git a/content/signal_processing/device_and_components/attachments/Pasted image 20231204114312.png b/content/signal_processing/device_and_components/attachments/Pasted image 20231204114312.png
new file mode 100644
index 000000000..8e4a3ee8a
Binary files /dev/null and b/content/signal_processing/device_and_components/attachments/Pasted image 20231204114312.png differ
diff --git a/content/signal_processing/device_and_components/attachments/Pasted image 20231204130544.png b/content/signal_processing/device_and_components/attachments/Pasted image 20231204130544.png
new file mode 100644
index 000000000..feac3d561
Binary files /dev/null and b/content/signal_processing/device_and_components/attachments/Pasted image 20231204130544.png differ
diff --git a/content/signal_processing/device_and_components/attachments/Pasted image 20231204130617.png b/content/signal_processing/device_and_components/attachments/Pasted image 20231204130617.png
new file mode 100644
index 000000000..4599fc63c
Binary files /dev/null and b/content/signal_processing/device_and_components/attachments/Pasted image 20231204130617.png differ
diff --git a/content/signal_processing/device_and_components/attachments/Pasted image 20231204153959.png b/content/signal_processing/device_and_components/attachments/Pasted image 20231204153959.png
new file mode 100644
index 000000000..1b8f910d9
Binary files /dev/null and b/content/signal_processing/device_and_components/attachments/Pasted image 20231204153959.png differ
diff --git a/content/signal_processing/device_and_components/attachments/Pasted image 20231204154015.png b/content/signal_processing/device_and_components/attachments/Pasted image 20231204154015.png
new file mode 100644
index 000000000..1ee9cf870
Binary files /dev/null and b/content/signal_processing/device_and_components/attachments/Pasted image 20231204154015.png differ
diff --git a/content/signal_processing/device_and_components/attachments/Pasted image 20231204160536.png b/content/signal_processing/device_and_components/attachments/Pasted image 20231204160536.png
new file mode 100644
index 000000000..0dcc5b069
Binary files /dev/null and b/content/signal_processing/device_and_components/attachments/Pasted image 20231204160536.png differ
diff --git a/content/signal_processing/device_and_components/attachments/Pasted image 20231204160640.png b/content/signal_processing/device_and_components/attachments/Pasted image 20231204160640.png
new file mode 100644
index 000000000..6a1e3d562
Binary files /dev/null and b/content/signal_processing/device_and_components/attachments/Pasted image 20231204160640.png differ
diff --git a/content/signal_processing/device_and_components/attachments/Pasted image 20231204163238.png b/content/signal_processing/device_and_components/attachments/Pasted image 20231204163238.png
new file mode 100644
index 000000000..cfb309d36
Binary files /dev/null and b/content/signal_processing/device_and_components/attachments/Pasted image 20231204163238.png differ
diff --git a/content/signal_processing/device_and_components/attachments/Pasted image 20231204163255.png b/content/signal_processing/device_and_components/attachments/Pasted image 20231204163255.png
new file mode 100644
index 000000000..c5f10bf2c
Binary files /dev/null and b/content/signal_processing/device_and_components/attachments/Pasted image 20231204163255.png differ
diff --git a/content/signal_processing/device_and_components/attachments/Pasted image 20231204165012.png b/content/signal_processing/device_and_components/attachments/Pasted image 20231204165012.png
new file mode 100644
index 000000000..f547f1aab
Binary files /dev/null and b/content/signal_processing/device_and_components/attachments/Pasted image 20231204165012.png differ
diff --git a/content/signal_processing/device_and_components/cable/AWG.md b/content/signal_processing/device_and_components/cable/AWG.md
new file mode 100644
index 000000000..20b82d55d
--- /dev/null
+++ b/content/signal_processing/device_and_components/cable/AWG.md
@@ -0,0 +1,19 @@
+---
+title: American Wire Gauge
+tags:
+ - cable
+ - protocols
+---
+AWG是American Wire Gauge的缩写,它是一种用于表示电线和电缆导线直径大小的标准。AWG标准是在美国制定的,常用于美国、加拿大和其他一些国家。
+
+AWG标准中,导线的直径越大,AWG编号就越小。AWG编号是递减的,例如,AWG 20的导线直径比AWG 18的导线直径要大。
+
+根据AWG标准,导线尺寸可以通过一个公式进行计算。以AWG 20为例,其导线直径为0.8128毫米(0.032英寸),每个AWG编号的直径变化是由公式如下计算而得:
+
+$$
+d(n) = 0.127 \times 92^{\frac{36-n}{39}}
+$$
+
+其中,d(n)表示编号为n的AWG导线的直径。
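+
+A tiny sketch of the formula above (diameters in mm), checked against the 20 AWG value quoted earlier:
+
+```python
+# d(n) = 0.127 * 92 ** ((36 - n) / 39), diameter in millimetres.
+def awg_diameter_mm(n: int) -> float:
+    return 0.127 * 92 ** ((36 - n) / 39)
+
+for awg in (10, 20, 30):
+    print(awg, round(awg_diameter_mm(awg), 3))
+# 20 AWG -> ~0.812 mm, close to the 0.8128 mm quoted above
+```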
+
+AWG标准在电子、电气和通信领域中广泛使用,用于指定导线或电缆的尺寸,以确保适当的电流容量和电阻特性。在选择电线、电缆或导线时,了解AWG编号可以帮助确定其直径大小和相应的电气特性。
\ No newline at end of file
diff --git a/content/signal_processing/device_and_components/cable/attachments/Pasted image 20231205144443.png b/content/signal_processing/device_and_components/cable/attachments/Pasted image 20231205144443.png
new file mode 100644
index 000000000..9fa0cec17
Binary files /dev/null and b/content/signal_processing/device_and_components/cable/attachments/Pasted image 20231205144443.png differ
diff --git a/content/signal_processing/device_and_components/cable/cable.md b/content/signal_processing/device_and_components/cable/cable.md
new file mode 100644
index 000000000..82cea903f
--- /dev/null
+++ b/content/signal_processing/device_and_components/cable/cable.md
@@ -0,0 +1,300 @@
+---
+title: Cable Study
+tags:
+ - signal
+ - equipment
+ - devices
+---
+# Structure
+
+
+
+* Conductor is located at the center of the cable
+* Other layers is to protect
+
+## Conductor Material
+
+* Sliver
+* Copper
+* Aluminum
+* Nickel
+* Tin
+
+### Aluminum
+
+#### Pros
+
+* Lightweight
+* Affordable
+* wide-range used in projects
+
+#### Cons
+
+* Less conductive than copper, another popular option
+
+### Copper
+
+#### Pros
+
+* Moves electricity quickly
+* inexpensive
+* versatile
+* In addition to being used bare, copper can also be dipped. Coating the copper with another metal can enhance specific qualities necessary for different applications.
+ * For example, in tinned copper, the layer of tin protects the copper from corrosion at high temperatures and makes the wire last longer.
+ * Tinned copper is easier to solder than bare copper, without a huge increase in cost.
+
+### Silver
+
+#### Pros
+
+* The most conductive
+* Using silver-plated wire
+ * Can reduce budget and can still provide many of the excellent conductive qualities of silver
+ * Wide temperature range,
+ * $-65 \degree \text{C} \leftrightarrow 200 \degree \text{C}$
+ * Used in aerospace applications
+#### Cons
+
+* High price
+
+### Nickel
+
+#### Pros
+
+* Used in Nickel-plated wire
+ * Operate in extreme conditions
+ * Wide temperature range
+ * If the nickel-plated is thick, it can withstand temperatures up to $750\degree \text{C}$
+* Excellent corrosion resistance
+
+## Conductor configure
+
+
+> [!tip]
+> Conductor is not what they're made of that makes them different, it's also how they're configured
+
+
+### Solid or Stranrded
+
+Chinese translation: 实心导体和绞合导体
+
+
+
Solid conductor is on the left, stranded conductor is on the right
+
+
+#### Solid
+
+* Solid conductors are made of one piece of metal
+* Inexpensive
+* mechanically tough
+* Not flexible
+
+#### Stranded
+
+* Stranded conductors are made of several threads of metal
+* Flexible
+* a little more expensive
+* The Better flexibility can make a big difference in many applications
+
+> [!hint]
+> 根据前哥说的趋肤效应([Skin effect](https://zh.wikipedia.org/wiki/%E9%9B%86%E8%86%9A%E6%95%88%E6%87%89)),高频信号的电子喜欢在金属表面移动,因此实心导体可能已经被淘汰了。
+>
+> [skin effect note](signal_processing/device_and_components/cable/skin_effect.md)
+
+### Stranded Constructions
+
+* Bunched Stranded Conductor
+* Concentric Lay Stranded Conductor
+* Uni-lay Stranded Conductor
+* Rope-lay Stranded Conductor
+
+#### Bunched Stranded Conductor
+
+
+
+
+
+Bunched strands are simply gathered together without any specific arrangement.
+
+* Inexpensive
+
+#### Concentric Lay Stranded Conductor
+
+
+
+Concentric stranding (同心绞合)
+
+In concentric stranding, the layers alternate in terms of their twist direction.
+
+
+#### Uni-lay Stranded Conductor
+
+Uni-lay Stranding (单向绞合)
+
+In uni-lay stranding, every layer is twisted in the same direction.
+
+> [!info]
+> Concentric stranded construction provides **better shielding and resistance to external electromagnetic interference**. This type of construction is often used for cables that **require better electromagnetic compatibility (EMC)**, such as in **wireless communications** or **in electromagnetically sensitive environments**.
+>
+> Uni-lay stranded constructions are typically more **flexible for scenarios that require cables to be bent and moved frequently**. This type of construction is commonly used in applications such as flexible cables and connecting cables for **mobile equipment**.
+
+
+#### Rope-lay Stranded Conductor
+
+"Rope lay" is a type of stranded construction for cables or wires where the conductors or strands are twisted together **in a rope-like fashion**. This type of stranded construction is typically used in cables with **large diameters and high strength requirements** to provide greater flexibility and durability.
+
+In a rope lay construction, the stranded conductors or strands are arranged in a spiral fashion to form a rope-like structure. This is a departure from the traditional uni-lay or multi-lay construction.
+
+
+
+
+
+
+
+# Cable Structure
+
+
+
+
+
+1. **Standard Conductor(标准导体):**
+
+ - **功能:** 标准导体是电缆中的金属导体,通常采用铜或铝,用于传导电流。
+ - **特点:** 导体的直径和材料的选择取决于电缆的用途、电流负荷以及其他设计要求。
+2. **Conductor Screen(导体屏蔽):**
+
+ - **功能:** 导体屏蔽是一层半导体材料,包裹在导体表面,旨在提供电场屏蔽,减小电缆内的电场梯度。
+ - **特点:** 通常采用半导体橡胶或半导体聚乙烯作为导体屏蔽材料。
+3. **Insulation(绝缘):**
+
+ - **功能:** 绝缘是包围在导体周围的材料,用于阻止电流流失,防止导体之间或导体与地之间发生短路。
+ - **材料:** 常见的绝缘材料包括聚乙烯(PE)、聚氯乙烯(PVC)、交联聚乙烯(XLPE)等。
+4. **Insulation Screen(绝缘屏蔽):**
+
+ - **功能:** 绝缘屏蔽是绝缘层外的一层半导体材料,有助于提高电缆的电场均匀分布。
+ - **材料:** 通常采用与绝缘相似的半导体材料。
+5. **Metallic Sheath(金属护套):**
+
+ - **功能:** 金属护套是电缆的外部保护层,提供机械强度、防水、抗化学腐蚀等保护。
+ - **材料:** 金属护套通常由铝或镀锌钢带制成,提供额外的电磁屏蔽。
+6. **Bedding(铺底层):**
+
+ - **功能:** 铺底层是位于绝缘层和金属护套之间的一层材料,提供机械保护和防水功能。
+ - **材料:** 常见的铺底层材料包括聚乙烯(PE)、聚氯乙烯(PVC)等。
+7. **Armouring(铠装层):**
+
+ - **功能:** 铠装层提供额外的机械强度,使电缆能够抵抗外部的物理损伤,如挤压、拉伸等。
+ - **类型:** 铠装可以是钢丝铠装(SWA)或铝丝铠装(AWA),根据需要选择。
+8. **Serving(编织层):**
+
+ - **功能:** Serving是电缆的编织层,位于导体或绝缘层与护套之间,提供机械强度、防护绝缘层的功能,改善电缆的柔韧性和耐弯曲性能。
+ - **材料:** Serving通常由金属线、纤维线或其他合适的材料编织而成。
+9. **Jacket(护套):**
+
+ - **功能:** 护套是电缆的最外层,提供额外的机械保护、防水、耐化学腐蚀等性能。
+ - **材料:** 常见的护套材料包括聚氯乙烯(PVC)、聚乙烯(PE)、低烟无卤(LSZH)、聚氨酯(PUR)等。
+
+
+## Shield (or screen)
+
+> [!tip]
+> 在电缆术语中,"shield"(屏蔽层)和"screen"(屏蔽层)有时可以互换使用,具体取决于上下文和地区的说法。通常情况下,这两个术语都指的是电缆中用于提供电磁屏蔽的层。下面对这两个术语进行简要澄清:
+>
+> 1. **Shield(屏蔽层):**
+> - "Shield" 是一个通用术语,用于描述电缆中的各种用于屏蔽的层。这可以包括对导体周围的屏蔽、对绝缘层的屏蔽等。因此,当说到屏蔽层时,可以用"shield"这个术语,而这个层可能是导体屏蔽、绝缘屏蔽或其他形式的屏蔽。
+> 2. **Screen(屏蔽层):**
+> - "Screen" 是电缆领域中一种更常见的术语,通常用于指代屏蔽层。在某些地区,人们更倾向于使用"screen"这个词来描述电缆中的屏蔽。同样,这可能包括对导体周围的屏蔽、对绝缘层的屏蔽等。
+>
+> 总体而言,这两个术语在许多情况下可以互换使用,但具体的使用可能会受到地区和行业的影响。在一些国家或特定标准中,可能更倾向于使用其中一个术语而不是另一个。因此,在特定上下文中,最好查看相关的标准或规范以确定使用的确切术语。
+
+
+### Shield Type
+
+1. **Foil Shield(箔屏蔽):**
+
+ - **结构:** 箔屏蔽是由一层薄金属箔(通常是铝)构成的。这层箔紧贴在绝缘层或导体屏蔽的外表面。
+ - **特点:** 箔屏蔽提供了对高频电磁干扰的有效屏蔽,并且相对轻巧灵活。它通常用于对频率要求较高的应用中,例如通信电缆。
+2. **Braid Shield(编织屏蔽):**
+
+ - **结构:** 编织屏蔽由金属线(通常是铜或铝)编织而成,覆盖在绝缘层或导体屏蔽的外表面。
+ - **特点:** 编织屏蔽提供了较好的机械强度和耐挠性,适用于柔性电缆或需要频繁弯曲的场合。它也提供了良好的电磁屏蔽效果。
+3. **Spiral Shield(螺旋屏蔽):**
+
+ - **结构:** 螺旋屏蔽由金属带或金属线以螺旋状绕绕在电缆的绝缘层或导体屏蔽上。
+ - **特点:** 螺旋屏蔽相对于编织屏蔽来说更容易施工,但在机械强度和电磁屏蔽性能方面可能略逊一筹。
+4. **Combination Shield(组合屏蔽):**
+
+ - **结构:** 组合屏蔽是指在同一电缆中使用多种不同类型的屏蔽,例如在导体上先使用箔屏蔽,然后外面再套用编织屏蔽。
+ - **特点:** 组合屏蔽结合了不同类型屏蔽的优势,提供了更全面的电磁屏蔽效果,适用于一些对电磁兼容性要求较高的应用。
+
+### Shield vs. Armor
+
+> [!tip]
+> Shield vs. Armor
+>
+> Shield protects the integrity of the signal; armor is primarily used to prevent physical damage to the cable itself.
+
+
+# Letters on Cable
+
+
+
+## Size
+
+* AWG - [American Wire Gauge](signal_processing/device_and_components/cable/AWG.md)
+* $mm^2$ - Square millimeters
+* MCM - Thousand Circular Mils
+* KCMil - Thousand Circular Mils
+
+1MCM = 1KCMil = 0.5067 $mm^2$
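+
+As a quick sanity check, here is a small Python sketch converting between these size units. The AWG diameter formula is the standard ASTM B258 relation, the kcmil factor is the one quoted above, and the wire sizes in the print statements are only illustrative:
+
+```python
+import math
+
+def awg_to_mm2(awg: int) -> float:
+    """Cross-sectional area of a solid AWG conductor in mm^2 (ASTM B258 diameter formula)."""
+    d_mm = 0.127 * 92 ** ((36 - awg) / 39)   # diameter in mm
+    return math.pi * d_mm ** 2 / 4
+
+def kcmil_to_mm2(kcmil: float) -> float:
+    """1 kcmil (= 1 MCM) is 0.5067 mm^2."""
+    return kcmil * 0.5067
+
+print(f"12 AWG  ~ {awg_to_mm2(12):.2f} mm^2")    # ~3.31 mm^2
+print(f"250 MCM ~ {kcmil_to_mm2(250):.0f} mm^2") # ~127 mm^2
+```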
+
+## Insulation type and application
+
+* T - Thermoplastic
+* E - Elastomer
+* R - Rubber
+* H - Temperature resistance of 75$\degree \text{C}$
+* HH - Heat resistance of 90$\degree\text{C}$
+* N - Nylon covered
+* O - Oil Resistant Jacket
+* OO - Oil Resistant Jacket & Oil Resistant Insulation
+* W - Weather and Water resistance, -60$\degree\text{C}$
+* PV - Photovoltaic / Solar power cable, temperature resistance of 105$\degree\text{C}$
+* XLPE - Cross-linked polyethylene, for medium- and high-voltage use (600 V up to 11 kV and above)
+* MI - Mineral Insulated, fire retardant
+
+## Rated Voltages
+
+* S - Service, the cable is rated to 600 volts
+* J - Junior, the cable is rated to 300 volts
+* Number
+
+## Quality Control Certified
+
+* [UL, TUV, ISO ... ...](signal_processing/device_and_components/quality_control_certified/qcc.md)
+
+# Cable Properties - Especially for RF circuit
+
+## Intro
+
+**RF cables are an often overlooked part of the whole RF setup.** You may have the right kit, the right frequencies, everything tuned up, and the antennas in the right place, but it is still very easy to shoot yourself in the foot by selecting the wrong antenna cables.
+
+RF cables are quite different from audio cables: audio cables can be run over long distances without inducing much loss, whereas RF cables are relatively lossy by comparison. The amount of loss inside a cable depends on three things.
+
+* Quality of the cable
+* Frequency you're currently trying to transmit
+* The length of the cable
+
+RF circuits need to consider impedance matching, and the component whose impedance is most likely to fluctuate is the cable. That is why the antenna cable used in radio systems is usually **[coax cable](signal_processing/device_and_components/cable/coax_cable.md) with a nice BNC connector**.
+
+
+
+# Reference
+
+* [_Cable Basics 101: Conductors - Brought to You by Allied Wire & Cable_. _www.youtube.com_, https://www.youtube.com/watch?v=gtAaZ2hFYTA. Accessed 4 Dec. 2023.](https://www.youtube.com/watch?v=gtAaZ2hFYTA)
+* [_Cable Construction | Design & Technology | Atlas Cables_. https://www.atlascables.com/design-construction.html. Accessed 4 Dec. 2023.](https://www.atlascables.com/design-construction.html)
+* [“集膚效應.” 维基百科,自由的百科全书, 21 Aug. 2022. _Wikipedia_, https://zh.wikipedia.org/w/index.php?title=%E9%9B%86%E8%86%9A%E6%95%88%E6%87%89&oldid=73309042.](https://zh.wikipedia.org/wiki/%E9%9B%86%E8%86%9A%E6%95%88%E6%87%89)
+* [_Cable Properties - CompTIA Network+ N10-004: 2.1_. _www.youtube.com_, https://www.youtube.com/watch?v=2DLwqWDg2uw. Accessed 5 Dec. 2023.](https://www.youtube.com/watch?v=2DLwqWDg2uw)
+* [_How to Read Wire and Cable Markings._ _www.youtube.com_, https://www.youtube.com/watch?v=O1iooveG3-4. Accessed 5 Dec. 2023.](https://www.youtube.com/watch?v=O1iooveG3-4)
+* [_Understanding RF Cables_. _www.youtube.com_, https://www.youtube.com/watch?v=y2kroLxp3ZQ. Accessed 5 Dec. 2023.](https://www.youtube.com/watch?v=y2kroLxp3ZQ)
+* [_Coaxial Cable - Lesson 2_. _www.youtube.com_, https://www.youtube.com/watch?v=HD5ODAdl1Ow. Accessed 5 Dec. 2023.](https://www.youtube.com/watch?v=HD5ODAdl1Ow)
\ No newline at end of file
diff --git a/content/signal_processing/device_and_components/cable/coax_cable.md b/content/signal_processing/device_and_components/cable/coax_cable.md
new file mode 100644
index 000000000..75303835f
--- /dev/null
+++ b/content/signal_processing/device_and_components/cable/coax_cable.md
@@ -0,0 +1,20 @@
+---
+title: Coaxial Cable
+tags:
+ - devices
+ - equipment
+ - cable
+---
+
+
+
+
+
+A coaxial cable is a transmission line consisting of an inner conducting wire of radius A and an outer conducting sheath of radius B. The space between the two conductors is filled with a dielectric. The fields are entirely contained inside the cable, so coaxial cables are well protected from outside interference. However, they are difficult to fabricate, [unbalanced](signal_processing/device_and_components/cable/coax_cable_imbalance.md) and lossy over long distances, so their use is constrained to close-range applications.
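+
+As a rough illustration of how the geometry and dielectric set the line impedance, here is a minimal Python sketch of the standard coaxial characteristic-impedance formula $Z_0 = \frac{1}{2\pi}\sqrt{\mu/\varepsilon}\,\ln(B/A)$. The radii and permittivity below are assumed, RG-58-like values, not figures from this note:
+
+```python
+import math
+
+def coax_impedance(a_mm: float, b_mm: float, eps_r: float = 1.0) -> float:
+    """Characteristic impedance Z0 = eta0 / (2*pi*sqrt(eps_r)) * ln(b/a), in ohms."""
+    eta0 = 376.730313668          # impedance of free space, ohms
+    return eta0 / (2 * math.pi * math.sqrt(eps_r)) * math.log(b_mm / a_mm)
+
+# assumed RG-58-like geometry: inner radius ~0.405 mm, dielectric radius ~1.475 mm, solid PE (eps_r ~ 2.25)
+print(f"Z0 ~ {coax_impedance(0.405, 1.475, 2.25):.1f} ohm")   # ~52 ohm, in the 50-ohm ballpark
+```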
+
+
+
+# Reference
+
+* [_Coaxial Cable - Lesson 2_. _www.youtube.com_, https://www.youtube.com/watch?v=HD5ODAdl1Ow. Accessed 5 Dec. 2023.](https://www.youtube.com/watch?v=HD5ODAdl1Ow)
+* [_Why Is Coax Unbalanced?_ _www.youtube.com_, https://www.youtube.com/watch?v=D-DKuye6ODg. Accessed 5 Dec. 2023.](https://www.youtube.com/watch?v=D-DKuye6ODg)
\ No newline at end of file
diff --git a/content/signal_processing/device_and_components/cable/coax_cable_imbalance.md b/content/signal_processing/device_and_components/cable/coax_cable_imbalance.md
new file mode 100644
index 000000000..93d73bf39
--- /dev/null
+++ b/content/signal_processing/device_and_components/cable/coax_cable_imbalance.md
@@ -0,0 +1,25 @@
+---
+title: Why coaxial cable is imbalance
+tags:
+ - signal
+ - equipment
+ - devices
+ - cable
+---
+
+# You should know:
+
+
+
+# Key Point in this story
+
+1. Current flows in closed circuits
+2. Electromagnetic radiation
+3. field cancellation
+4. skin effect
+
+
+
+# Reference
+
+* [_Why Is Coax Unbalanced?_ _www.youtube.com_, https://www.youtube.com/watch?v=D-DKuye6ODg. Accessed 5 Dec. 2023.](https://www.youtube.com/watch?v=D-DKuye6ODg)
\ No newline at end of file
diff --git a/content/signal_processing/device_and_components/cable/skin_effect.md b/content/signal_processing/device_and_components/cable/skin_effect.md
new file mode 100644
index 000000000..134f071fd
--- /dev/null
+++ b/content/signal_processing/device_and_components/cable/skin_effect.md
@@ -0,0 +1,26 @@
+---
+title: Skin effect
+tags:
+ - devices
+ - equipment
+ - signal
+ - signal-processing
+---
+# Simple review
+
+The skin effect means that, for high-frequency alternating current, the current flows mainly in a thin layer near the conductor surface rather than being distributed evenly over the whole cross-section. At high frequencies the current therefore concentrates at the surface, while the current density deep inside the conductor is comparatively small.
+
+For cables and conductors the skin effect becomes more significant as the frequency rises. *Roughly speaking, it starts to matter once the frequency reaches a few hundred kilohertz (kHz) or more*; the higher the frequency, the more the current crowds towards the surface.
+
+When designing and selecting cables, especially for high-frequency applications, the skin effect has to be taken into account because it can cause the following problems:
+
+1. **Increased resistance:** Since the current flows mainly near the surface, the effective conductor cross-section shrinks and the resistance rises.
+
+2. **Energy loss:** With the current concentrated at the surface, more energy is dissipated inside the cable.
+
+3. **Signal distortion:** At high frequencies the signal can be distorted, because components at different frequencies distribute themselves differently across the conductor.
+
+
+High-frequency cable designs may use measures to mitigate the skin effect, such as multi-stranded (litz-style) conductors or silver-plated conductors. In some special high-frequency applications, hollow conductors may even be used to reduce resistance and loss.
+
+Overall, the frequency threshold at which the skin effect must be considered depends on the specific application requirements and design standards.
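+
+A minimal Python sketch of the standard skin-depth formula $\delta = \sqrt{\rho/(\pi f \mu_0 \mu_r)}$ for copper, just to illustrate the frequency dependence described above; the material constants are assumed textbook values:
+
+```python
+import math
+
+def skin_depth(f_hz: float, rho: float = 1.68e-8, mu_r: float = 1.0) -> float:
+    """Skin depth in metres for a good conductor: sqrt(rho / (pi * f * mu0 * mu_r))."""
+    mu0 = 4 * math.pi * 1e-7
+    return math.sqrt(rho / (math.pi * f_hz * mu0 * mu_r))
+
+for f in (50, 1e3, 100e3, 10e6, 1e9):
+    # copper: ~9.2 mm at 50 Hz, ~0.2 mm at 100 kHz, ~0.02 mm at 10 MHz
+    print(f"{f:>12.0f} Hz -> {skin_depth(f) * 1e3:.4f} mm")
+```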
\ No newline at end of file
diff --git a/content/signal_processing/device_and_components/feeding_tech/feeding_tech.md b/content/signal_processing/device_and_components/feeding_tech/feeding_tech.md
new file mode 100644
index 000000000..a2192abbb
--- /dev/null
+++ b/content/signal_processing/device_and_components/feeding_tech/feeding_tech.md
@@ -0,0 +1,11 @@
+---
+title: Antenna Feeding Tech
+tags:
+ - signal-processing
+ - antenna
+---
+
+
+# Reference
+
+* [_Antenna Matching with a Vector Network Analyzer_. https://www.tek.com/en/blog/antenna-matching-vector-network-analyzer. Accessed 13 Dec. 2023.](https://www.tek.com/en/blog/antenna-matching-vector-network-analyzer)
\ No newline at end of file
diff --git a/content/signal_processing/device_and_components/op_amp.md b/content/signal_processing/device_and_components/op_amp.md
new file mode 100644
index 000000000..90e73a832
--- /dev/null
+++ b/content/signal_processing/device_and_components/op_amp.md
@@ -0,0 +1,8 @@
+---
+title: Operational Amplifier
+tags:
+ - signal
+---
+Operational amplifier (op-amp).
+
+To be written...
\ No newline at end of file
diff --git a/content/signal_processing/device_and_components/quality_control_certified/qcc.md b/content/signal_processing/device_and_components/quality_control_certified/qcc.md
new file mode 100644
index 000000000..9bbad4a3b
--- /dev/null
+++ b/content/signal_processing/device_and_components/quality_control_certified/qcc.md
@@ -0,0 +1,17 @@
+---
+title: Quality Control Certified
+tags:
+ - devices
+ - equipment
+ - protocols
+---
+Quality Control Certified refers to a quality-control system that has passed a series of certifications and standards assessments. UL (Underwriters Laboratories), TUV (Technischer Überwachungsverein) and ISO (International Organization for Standardization) are three of the most common certification and standards bodies. Briefly:
+
+1. UL certification: UL is an independent safety-science organisation headquartered in the United States. UL certification assesses product safety and performance against specific safety standards and regulations, and is used to demonstrate the safety and quality of products on the market. It covers many fields such as electrical, electronics, building materials, and fire safety. The UL mark displayed on a product indicates that the product meets UL's verification requirements.
+
+2. TUV certification: TUV is an independent German certification body offering product certification and quality-management-system certification. TUV certification focuses on product quality, safety, and compliance, covering fields including machinery, electrical, electronics, chemicals, and medical devices. The TUV mark on a product indicates that it has passed TUV's quality assessment and certification.
+
+3. ISO certification: ISO is an international standardisation organisation that develops standards for many industries. ISO certification means an organisation has implemented and complies with a specific ISO standard for its quality-management system. ISO 9001 is the most common type, providing a set of norms and best practices for quality management covering quality control, process management, customer satisfaction, supply-chain management, and more. ISO certification helps organisations improve their quality management, strengthen customer confidence, and meet the quality requirements of international markets.
+
+
+These quality-control certification marks represent recognition that a product or organisation meets a certain standard. Obtaining them can improve a product's market competitiveness, customer trust, and the organisation's quality-management level.
\ No newline at end of file
diff --git a/content/signal_processing/envelope/attachments/Pasted image 20240102150350.png b/content/signal_processing/envelope/attachments/Pasted image 20240102150350.png
new file mode 100644
index 000000000..07252a241
Binary files /dev/null and b/content/signal_processing/envelope/attachments/Pasted image 20240102150350.png differ
diff --git a/content/signal_processing/envelope/attachments/Pasted image 20240102155308.png b/content/signal_processing/envelope/attachments/Pasted image 20240102155308.png
new file mode 100644
index 000000000..fb3e5eff6
Binary files /dev/null and b/content/signal_processing/envelope/attachments/Pasted image 20240102155308.png differ
diff --git a/content/signal_processing/envelope/attachments/Pasted image 20240103160713.png b/content/signal_processing/envelope/attachments/Pasted image 20240103160713.png
new file mode 100644
index 000000000..11a600ddc
Binary files /dev/null and b/content/signal_processing/envelope/attachments/Pasted image 20240103160713.png differ
diff --git a/content/signal_processing/envelope/hilbert_transform.md b/content/signal_processing/envelope/hilbert_transform.md
new file mode 100644
index 000000000..df37cb825
--- /dev/null
+++ b/content/signal_processing/envelope/hilbert_transform.md
@@ -0,0 +1,95 @@
+---
+title: Hilbert Transform Envelope
+tags:
+ - signal-processing
+ - algorithm
+ - envelope
+---
+
+# Introduction
+
+
+
+# Envelope Explanation
+## Envelope and Fine Structure
+
+* Envelope:
+ * The envelope of a signal represents the slowly varying amplitude or outline of the signal. It provides a smooth curve that encapsulates the main shape of the signal, ignoring the rapid oscillations or fluctuations. The envelope is typically associated with the low-frequency components of a signal.
+* Fine Structure:
+ * The fine structure of a signal refers to the detailed, high-frequency components or rapid oscillations present in the signal. It captures the fast variations that occur on a shorter time scale compared to the envelope.
+
+
+# Algorithm Detail
+
+## History
+
+* 1905 - Hilbert introduced the Hilbert transform while studying the Riemann-Hilbert problem; his early work on the discrete Hilbert transform goes back to his lectures in Göttingen.
+* Hermann Weyl published results on the discrete Hilbert transform in his dissertation.
+* Schur improved the results on the discrete Hilbert transform and extended them to the integral case.
+
+**The use of the Hilbert transform in signal processing, however, traces back to the establishment of the analytic-signal representation.**
+
+> [!hint]
+> "传统经典的信号研究方法主要概括为基于傅里叶变换的谱分析、基于概率分布的统计分析和其它随机信号表示方法,同时还有起源于很早的典型谱、相关和分布特征,而这些分析方法研究的**一个基本考虑是将随机信号表达为两个独立函数的乘积**”
+>
+
+Early work on the envelope and instantaneous phase was based on the Cartesian x-y coordinate system,
+
+
+
+with the relations:
+$$
+\begin{align}
+A^2 & = x^2+y^2 \\
+\varphi & = \arctan{\frac{y}{x}}
+\end{align}
+$$
+This representation was carried into the Fourier series, $x_k = \sum a_k\cos\varphi_k + b_k\sin\varphi_k$, whose amplitude and phase both follow from the two Cartesian relations above, with the coordinates $(x,y)$ playing the role of $(a_k,b_k)$. Studying the envelope and instantaneous phase of a modulated signal this way relies on a great formula:
+
+$$
+e^{i\varphi} = \cos{\varphi} + i\sin{\varphi}
+$$
+
+In 1946, Gabor defined a more general, Euler-style representation for complex functions:
+
+$$
+Y(t) = u(t) + iv(t)
+$$
+where $v(t)$ is the Hilbert transform of $u(t)$.
+
+In 1998, Huang made landmark contributions to modern Hilbert-transform research (EMD and the HHT), which made Hilbert-transform theory blossom throughout modern signal analysis.
+
+
+## Analytical Signal
+
+
+
+## Mathematical description
+
+Mathematically, the Hilbert transform **rotates each Fourier component of the signal by 90° in the complex plane**.
+
+$$
+H(\mu)(t) = \frac{1}{\pi} \, \text{p.v.} \int_{-\infty}^{\infty} \frac{\mu(\tau)}{t-\tau}\,d\tau
+$$
+
+
+
+
+The Hilbert transform is given by the [Cauchy principal value](Math/real_analysis/cauchy_principal_value.md) of the convolution with the function $1/(\pi t)$.
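+
+A minimal sketch of envelope extraction with `scipy.signal.hilbert`; the test signal, sampling rate and frequencies below are made up for illustration:
+
+```python
+import numpy as np
+from scipy.signal import hilbert
+
+fs = 1000.0                                               # sampling rate (Hz)
+t = np.arange(0, 1.0, 1 / fs)
+envelope_true = 1.0 + 0.5 * np.cos(2 * np.pi * 3 * t)     # slow envelope
+x = envelope_true * np.cos(2 * np.pi * 80 * t)            # fast carrier (fine structure)
+
+analytic = hilbert(x)                                     # x(t) + j * H{x}(t)
+envelope = np.abs(analytic)                               # instantaneous amplitude
+inst_phase = np.unwrap(np.angle(analytic))
+inst_freq = np.diff(inst_phase) / (2 * np.pi) * fs        # ~80 Hz away from the edges
+
+print(np.max(np.abs(envelope[50:-50] - envelope_true[50:-50])))  # small residual
+```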
+
+## Geometrical meaning of HT
+
+
+
+
+
+
+
+# Reference
+
+* [Mathuranathan. “Extract Envelope, Phase Using Hilbert Transform: Demo.” _GaussianWaves_, 24 Apr. 2017, https://www.gaussianwaves.com/2017/04/extract-envelope-instantaneous-phase-frequency-hilbert-transform/.](https://www.gaussianwaves.com/2017/04/extract-envelope-instantaneous-phase-frequency-hilbert-transform/)
+* [_CFC: What Does the Hilbert Transform Do? (V9)_. _www.youtube.com_, https://www.youtube.com/watch?v=-CjnFEOopfw. Accessed 2 Jan. 2024.](https://www.youtube.com/watch?v=-CjnFEOopfw)
+* [_Extract Envelope and Fine Structure in Praat Using the Hilbert Transform_. _www.youtube.com_, https://www.youtube.com/watch?v=qp1G3a2g8r0. Accessed 2 Jan. 2024.](https://www.youtube.com/watch?v=qp1G3a2g8r0)
+* [“希尔伯特变换与瞬时频率问题--连载(一).” 知乎专栏, https://zhuanlan.zhihu.com/p/25213895. Accessed 2 Jan. 2024.](https://zhuanlan.zhihu.com/p/25213895)
+* [_The Hilbert Transform_. _www.youtube.com_, https://www.youtube.com/watch?v=VyLU8hlhI-I. Accessed 3 Jan. 2024.](https://www.youtube.com/watch?v=VyLU8hlhI-I)
+
diff --git a/content/signal_processing/filter/attachments/Pasted image 20240108161455.png b/content/signal_processing/filter/attachments/Pasted image 20240108161455.png
new file mode 100644
index 000000000..886a89e21
Binary files /dev/null and b/content/signal_processing/filter/attachments/Pasted image 20240108161455.png differ
diff --git a/content/signal_processing/filter/attachments/Pasted image 20240108161800.png b/content/signal_processing/filter/attachments/Pasted image 20240108161800.png
new file mode 100644
index 000000000..9648173ff
Binary files /dev/null and b/content/signal_processing/filter/attachments/Pasted image 20240108161800.png differ
diff --git a/content/signal_processing/filter/chebyshev_filter.md b/content/signal_processing/filter/chebyshev_filter.md
new file mode 100644
index 000000000..e829c0d78
--- /dev/null
+++ b/content/signal_processing/filter/chebyshev_filter.md
@@ -0,0 +1,120 @@
+---
+title: Chebyshev Filter
+tags:
+ - signal-processing
+ - filter
+---
+
+# History
+
+## Chebyshev polynomials
+
+### Objective
+
+
+Chebyshev polynomials were introduced in the mid-19th century by the Russian mathematician Pafnuty Chebyshev. They have a wide range of applications and play an important role in mathematics and engineering.
+
+The main motivations and uses of Chebyshev polynomials are:
+
+1. **Approximation theory and interpolation:** Chebyshev polynomials are a family of orthogonal polynomials that can be used for approximation and interpolation. In approximation theory, a complicated function can be represented by a finite sum of Chebyshev polynomials, which is very useful for numerical computation and data analysis. Interpolating at the roots of the first-kind Chebyshev polynomials (the Chebyshev nodes) reduces the [Runge phenomenon](https://zh.wikipedia.org/wiki/%E9%BE%99%E6%A0%BC%E7%8E%B0%E8%B1%A1) and gives the best uniform polynomial approximation of a continuous function.
+
+2. **Least squares:** Thanks to their orthogonality, Chebyshev polynomials are an effective tool for least-squares approximation, and can serve as basis functions when fitting experimental data or solving nonlinear least-squares problems.
+
+3. **Numerical analysis:** Chebyshev polynomials are used to build efficient numerical methods, particularly for numerical integration and the numerical solution of differential equations; their properties allow some problems to be handled more accurately.
+
+4. **Oscillation and wave phenomena:** Chebyshev polynomials also play an important role in the study of vibration and wave phenomena. They appear in many problems in physics and engineering, particularly when describing periodic motion and vibration.
+
+
+### Definition
+
+Chebyshev polynomials arise as solutions of the following two [Chebyshev differential equations](https://zh.wikipedia.org/wiki/%E5%88%87%E6%AF%94%E9%9B%AA%E5%A4%AB%E6%96%B9%E7%A8%8B):
+
+$$
+(1-x^2)y'' - xy' + n^2 y = 0
+$$
+$$
+(1-x^2)y'' -3xy'+n(n+2)y = 0
+$$
+
+
+The solutions of these equations are power series, $y=\sum_{n=0}^{\infty}a_nx^n$, whose coefficients satisfy the recurrence $a_{n+2} = \frac{(n-p)(n+p)}{(n+1)(n+2)}a_n$.
+
+From these we obtain the **Chebyshev polynomials of the first kind** and the **Chebyshev polynomials of the second kind**.
+
+#### Chebyshev polynomials of the first kind
+
+
+$$
+\begin{equation}
+\begin{split}
+T_0(x) & = 1 \\
+T_1(x) & = x \\
+T_{n+1}(x) & = 2xT_n(x)-T_{n-1}(x)
+\end{split}
+\end{equation}
+$$
+
+#### Chebyshev polynomials of the second kind
+
+$$
+\begin{equation}
+\begin{split}
+U_0(x) & = 1 \\
+U_1(x) & = 2x \\
+U_{n+1}(x) & = 2xU_n(x) - U_{n-1}(x)
+\end{split}
+\end{equation}
+$$
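+
+A small Python sketch of the first-kind recurrence above, checked against the closed form $T_n(x)=\cos(n\arccos x)$; the sample points are arbitrary:
+
+```python
+import numpy as np
+
+def chebyshev_T(n: int, x):
+    """First-kind Chebyshev polynomial T_n(x) via T_{n+1}(x) = 2x T_n(x) - T_{n-1}(x)."""
+    x = np.asarray(x, dtype=float)
+    t_prev, t_curr = np.ones_like(x), x        # T_0, T_1
+    if n == 0:
+        return t_prev
+    for _ in range(n - 1):
+        t_prev, t_curr = t_curr, 2 * x * t_curr - t_prev
+    return t_curr
+
+x = np.linspace(-1, 1, 5)
+print(chebyshev_T(3, x))           # recurrence: 4x^3 - 3x
+print(np.cos(3 * np.arccos(x)))    # closed form, same values
+```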
+
+
+
+### Orthogonality
+
+$$
+\int_{-1}^1 T_n(x)T_m(x)\frac{dx}{\sqrt{1-x^2}} =
+\begin{cases}
+0 &:n \not = m \\
+\pi &: n=m=0 \\
+\frac{\pi}{2} & : n=m \not = 0
+\end{cases}
+$$
+
+$$
+\int_{-1}^1 U_n(x)U_m(x)\sqrt{1-x^2}dx =
+\begin{cases}
+0 &:n \not = m \\
+\frac{\pi}{2} & : n=m
+\end{cases}
+$$
+
+### Chebyshev Root
+
+The degree-_n_ Chebyshev polynomials of both kinds have _n_ distinct roots in the interval [−1, 1], called the **Chebyshev roots** (sometimes Chebyshev nodes), because they are used as the _interpolation points_ in polynomial interpolation.
+
+The $n$ roots of $T_n$ are:
+
+$$
+x_i = \cos{(\frac{2i-1}{2n}\pi)}
+$$
+
+The $n$ roots of $U_n$ are:
+$$
+x_i = \cos{(\frac{i}{n+1}\pi)}
+$$
+
+
+
+
+# Chebyshev Filter
+
+## Type I Chebyshev Filter
+
+The gain response:
+
+$$
+G_n(\omega) = |H_n(j\omega)| = \frac{1}{\sqrt{1+\varepsilon^2T_n^2(\omega/\omega_0)}}
+$$
+
+* $\varepsilon$ - ripple factor
+* $\omega_0$ - cutoff frequency
+* $T_n$ - Chebyshev polynomial
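+
+A minimal design sketch with `scipy.signal.cheby1`; the order, ripple, cutoff and sampling rate below are assumed example values, not anything prescribed by this note:
+
+```python
+import numpy as np
+from scipy import signal
+
+# 4th-order type-I Chebyshev low-pass: 1 dB passband ripple, 100 Hz cutoff, 1 kHz sampling
+fs = 1000.0
+b, a = signal.cheby1(N=4, rp=1, Wn=100, btype="low", fs=fs)
+
+w, h = signal.freqz(b, a, worN=2048, fs=fs)
+gain_db = 20 * np.log10(np.abs(h) + 1e-12)
+print(gain_db[w <= 100].min())   # ripples between 0 and -1 dB inside the passband (equiripple)
+```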
diff --git a/content/signal_processing/impulse_generating/gaussian_impulse.md b/content/signal_processing/impulse_generating/gaussian_impulse.md
new file mode 100644
index 000000000..a8aae3462
--- /dev/null
+++ b/content/signal_processing/impulse_generating/gaussian_impulse.md
@@ -0,0 +1,198 @@
+---
+title: Gaussian Impulse Generating
+tags:
+ - UWB
+ - signal-processing
+ - signal-generating
+---
+# Equation
+
+sinusoidal signal modulation:
+
+$$
+x(t) = A \cdot e^{(j2\pi ft)} \cdot e^{[-\frac{1}{2\sigma^2} \cdot (t-t_0)^2]}
+$$
+Let's consider the case in which the gaussian peak is at 0 and the total amplitude is 1:
+
+$$
+x(t) = e^{(j2\pi ft)} \cdot e^{[-\frac{1}{2\sigma^2} \cdot t^2]}
+$$
+So, considering the Fourier transform properties:
+
+$$
+e^{-\frac{t^2}{2\sigma^2}} \leftrightarrow \sigma\sqrt{2\pi} e^{\frac{-\sigma^2 \omega^2}{2}}
+$$
+
+$$
+e^{i\omega_0 t} f(t) \leftrightarrow F(\omega - \omega_0)
+$$
+
+So the spectrum of the sinusoid-modulated Gaussian pulse is:
+
+$$
+X(\omega) = \sigma \sqrt{2\pi}\, e^{-\frac{\sigma^2(\omega - 2\pi f_c)^2}{2}}
+$$
+
+## Equation in scipy.signal.gausspulse
+
+$$
+e^{-a t^2} e^{-j2\pi f_c t}
+$$
+
+Here is the relevant Fourier pair:
+
+$$
+\mathcal{F}[e^{-ax^2}](f) = \sqrt{\frac{\pi}{a}} e^{-\pi^2 \frac{f^2}{a}}
+$$
+And here is the scipy.signal.gausspulse source code:
+
+```python
+def gausspulse(t, fc=1000, bw=0.5, bwr=-6, tpr=-60, retquad=False,
+ retenv=False):
+ """
+ Return a Gaussian modulated sinusoid:
+
+ ``exp(-a t^2) exp(1j*2*pi*fc*t).``
+
+ If `retquad` is True, then return the real and imaginary parts
+ (in-phase and quadrature).
+ If `retenv` is True, then return the envelope (unmodulated signal).
+ Otherwise, return the real part of the modulated sinusoid.
+
+ Parameters
+ ----------
+ t : ndarray or the string 'cutoff'
+ Input array.
+ fc : float, optional
+ Center frequency (e.g. Hz). Default is 1000.
+ bw : float, optional
+ Fractional bandwidth in frequency domain of pulse (e.g. Hz).
+ Default is 0.5.
+ bwr : float, optional
+ Reference level at which fractional bandwidth is calculated (dB).
+ Default is -6.
+ tpr : float, optional
+ If `t` is 'cutoff', then the function returns the cutoff
+ time for when the pulse amplitude falls below `tpr` (in dB).
+ Default is -60.
+ retquad : bool, optional
+ If True, return the quadrature (imaginary) as well as the real part
+ of the signal. Default is False.
+ retenv : bool, optional
+ If True, return the envelope of the signal. Default is False.
+
+ Returns
+ -------
+ yI : ndarray
+ Real part of signal. Always returned.
+ yQ : ndarray
+ Imaginary part of signal. Only returned if `retquad` is True.
+ yenv : ndarray
+ Envelope of signal. Only returned if `retenv` is True.
+
+ See Also
+ --------
+ scipy.signal.morlet
+
+ Examples
+ --------
+ Plot real component, imaginary component, and envelope for a 5 Hz pulse,
+ sampled at 100 Hz for 2 seconds:
+
+ >>> import numpy as np
+ >>> from scipy import signal
+ >>> import matplotlib.pyplot as plt
+ >>> t = np.linspace(-1, 1, 2 * 100, endpoint=False)
+ >>> i, q, e = signal.gausspulse(t, fc=5, retquad=True, retenv=True)
+ >>> plt.plot(t, i, t, q, t, e, '--')
+
+ """
+ if fc < 0:
+ raise ValueError("Center frequency (fc=%.2f) must be >=0." % fc)
+ if bw <= 0:
+ raise ValueError("Fractional bandwidth (bw=%.2f) must be > 0." % bw)
+ if bwr >= 0:
+ raise ValueError("Reference level for bandwidth (bwr=%.2f) must "
+ "be < 0 dB" % bwr)
+
+ # exp(-a t^2) <-> sqrt(pi/a) exp(-pi^2/a * f^2) = g(f)
+
+ ref = pow(10.0, bwr / 20.0)
+ # fdel = fc*bw/2: g(fdel) = ref --- solve this for a
+ #
+ # pi^2/a * fc^2 * bw^2 /4=-log(ref)
+ a = -(pi * fc * bw) ** 2 / (4.0 * log(ref))
+
+ if isinstance(t, str):
+ if t == 'cutoff': # compute cut_off point
+ # Solve exp(-a tc**2) = tref for tc
+ # tc = sqrt(-log(tref) / a) where tref = 10^(tpr/20)
+ if tpr >= 0:
+ raise ValueError("Reference level for time cutoff must "
+ "be < 0 dB")
+ tref = pow(10.0, tpr / 20.0)
+ return sqrt(-log(tref) / a)
+ else:
+ raise ValueError("If `t` is a string, it must be 'cutoff'")
+
+ yenv = exp(-a * t * t)
+ yI = yenv * cos(2 * pi * fc * t)
+ yQ = yenv * sin(2 * pi * fc * t)
+ if not retquad and not retenv:
+ return yI
+ if not retquad and retenv:
+ return yI, yenv
+ if retquad and not retenv:
+ return yI, yQ
+ if retquad and retenv:
+ return yI, yQ, yenv
+
+```
+
+
+It means that,
+
+$$
+\text{Gaussian Pulse} = e^{-a t^2}
+$$
+$$
+a = -\frac{\pi^2 \times {f_c}^2 \times {bw}^2}{4 \times \log(10^{\frac{bwr}{20}})}
+$$
+
+
+So we can write the spectrum function like this (imports added; the envelope-spectrum line follows the Fourier pair above, i.e. $e^{-\pi^2 f^2/a}$):
+
+```python
+import math
+import numpy as np
+
+def gaussian_spcturm(f, fc, bw, bwr):
+    """Spectrum of the Gaussian envelope and of the modulated pulse.
+
+    `f` is assumed to be a frequency axis sorted from negative to positive
+    (e.g. np.fft.fftshift(np.fft.fftfreq(N, d=1/fs))).
+    """
+    ref = pow(10.0, bwr / 20.0)
+    a = -(math.pi * fc * bw) ** 2 / (4.0 * math.log(ref))
+
+    # baseband (unmodulated) envelope spectrum: exp(-pi^2 f^2 / a)
+    spectrum = np.exp(-(math.pi ** 2) * (f ** 2) / a)
+
+    # modulation shifts the envelope spectrum to +/- fc
+    f_positive = f[f >= 0]
+    f_negative = f[f < 0]
+
+    spectrum_positive = np.exp(-(math.pi ** 2) * ((f_positive - fc) ** 2) / a)
+    spectrum_negative = np.exp(-(math.pi ** 2) * ((f_negative + fc) ** 2) / a)
+    spectrum_modu = np.concatenate((spectrum_negative, spectrum_positive))
+
+    return spectrum, spectrum_modu
+```
+## Equation Details Explained
+
+In the function we do not ask the user to input $\sigma$ directly; instead the user supplies the fractional bandwidth and the reference level at which that bandwidth is measured. From these two parameters, `bw` and `bwr` (for example `bwr` = -3 dB), we can calculate $\sigma$.
+
+So,
+$$
+B_{frac} = \frac{2f_{-3dB}}{f_{mod}}=\frac{\sqrt{ln(2)}}{\pi \cdot \sigma_T \cdot f_{mod}}
+$$
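+
+A small numeric check of the band-edge definition, reusing the same expression for `a` as in the scipy source above; the `fc`, `bw`, `bwr` values are just examples:
+
+```python
+import math
+
+fc, bw, bwr = 1000.0, 0.5, -6.0
+ref = 10 ** (bwr / 20)                                   # amplitude ratio at the band edge
+a = -(math.pi * fc * bw) ** 2 / (4.0 * math.log(ref))    # same a as in the scipy source above
+
+def envelope_spectrum(f):
+    """Baseband Gaussian spectrum shape exp(-pi^2 f^2 / a), amplitude factor dropped."""
+    return math.exp(-(math.pi ** 2) * f ** 2 / a)
+
+edge = fc * bw / 2                                       # half of the fractional bandwidth
+print(envelope_spectrum(edge), ref)                      # both ~0.501 -> -6 dB points sit at fc*(1 +/- bw/2)
+```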
+
+
+
+
+# Reference
+
+* https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.gausspulse.html
+* https://dsp.stackexchange.com/questions/72497/fractional-bandwidth-of-a-gaussian-amplitude-modulated-signal
+* https://mathworld.wolfram.com/FourierTransformGaussian.html
\ No newline at end of file
diff --git a/content/signal_processing/radio_communication/Near_far_field.md b/content/signal_processing/radio_communication/Near_far_field.md
new file mode 100644
index 000000000..2bbca5ba5
--- /dev/null
+++ b/content/signal_processing/radio_communication/Near_far_field.md
@@ -0,0 +1,14 @@
+---
+title: Near Field vs. Far Field
+tags:
+ - signal
+ - radio
+ - physics
+ - electromagnetism
+---
+
+
+
+# Reference
+
+* [_EEVblog #1273 - EMC Near Field vs Far Field Explained_. _www.youtube.com_, https://www.youtube.com/watch?v=lYmfVMWbIHQ. Accessed 5 Dec. 2023.](https://www.youtube.com/watch?v=lYmfVMWbIHQ)
\ No newline at end of file
diff --git a/content/signal_processing/signal_processing_MOC.md b/content/signal_processing/signal_processing_MOC.md
new file mode 100644
index 000000000..fda9051aa
--- /dev/null
+++ b/content/signal_processing/signal_processing_MOC.md
@@ -0,0 +1,37 @@
+---
+title: Signal Processing - MOC
+tags:
+- MOC
+- signal-processing
+---
+# Basic
+
+* [Random Signal Basic](signal_processing/basic_knowledge/random_signal_basic.md)
+* [Fourier Transform](signal_processing/basic_knowledge/FT/fourier_transform.md)
+* [Power spectral density estimation](signal_processing/PSD_estimation/PSD_estimation.md)
+* [FBW - Fractional Band Width](signal_processing/basic_knowledge/concept/FBW.md)
+
+## Fourier Transform
+
+* [Fourier Transform](signal_processing/basic_knowledge/FT/fourier_transform.md)
+* [Fourier transform pairs and properties derivation](signal_processing/basic_knowledge/FT/fourier_transform_pairs_derivation.md)
+
+# Devices and Components
+
+* [✨Learn VNA in practical way](signal_processing/device_and_components/VNA_learn.md)
+* [Cable](signal_processing/device_and_components/cable/cable.md)
+
+# Signal Algorithm about
+
+## Envelope
+
+* [Hilbert Transform - Envelope](signal_processing/envelope/hilbert_transform.md)
+
+## Filter
+
+* [Chebyshev Filter](signal_processing/filter/chebyshev_filter.md)
+
+## Generating Impulse
+
+* [Gaussian Impulse Generating](signal_processing/impulse_generating/gaussian_impulse.md)
+
diff --git a/content/synthetic_aperture_radar_imaging/Antenna.md b/content/synthetic_aperture_radar_imaging/Antenna.md
new file mode 100644
index 000000000..82d1d665d
--- /dev/null
+++ b/content/synthetic_aperture_radar_imaging/Antenna.md
@@ -0,0 +1,139 @@
+---
+title: Antenna
+tags:
+- SAR
+- physics
+- basic
+---
+
+# Theorem you need know
+
+* [🧷Resonant circuit](Physics/Electromagnetism/Resonant_circuit.md)
+
+# What is antenna
+
+A usually metallic device for radiating or receiving radio waves
+
+## A simple model representing antenna
+
+
+
+* $R_L$ loss resistance - losses caused by the dielectric and the structure
+* $R_r$ radiation resistance - closely related to the energy actually radiated by the antenna
+* $X_A$ reactance - describes the near-field electromagnetic energy exchange of the antenna (normally $X_A$ = 0)
+
+> [!warning]
+> An antenna has another very important source of loss, **mismatch loss**: if the antenna's impedance does not match the front end, power cannot get into the antenna. This can be addressed through design and choice of materials.
+
+# Types of antennas
+
+## Wire antennas
+
+
+
+## Aperture antennas
+
+
+
+## Microstrip antennas
+
+
+
+## Array antennas
+
+
+
+> [!hint]
+> Put simply, the purpose of an antenna is to radiate as much energy as possible, and to radiate it in the directions and regions you want.
+
+## Reflector antennas & Lens antennas
+
+
+
+
+# Radiation mechanism
+
+## Ideal antenna
+
+Radiate all the power delivered to it from the transmitter in a desired direction or directions.
+
+## How is radiation accomplished?
+
+* How are EM fields generated by the source?
+* How are EM fields contained and guided within the transmission line & antenna?
+* How are EM fields finally detached from the antenna to form a free-space wave?
+
+### How are EM fields generated by the source?
+
+
+
+* $q_v$ charge density, $C/m^3$
+* $v_Z$ charge velocity, $m/s$
+* $J_Z$ current density, $A/m^2$
+$$
+A/m^2 = C/m^3 \cdot m/s = \frac{C}{m^2 \cdot s}
+$$
+
+When the wire is made of a PEC (perfect electric conductor), or at high frequency, the current becomes a surface current:
+* $J_S$ becomes a surface current density, $A/m$
+* $q_S$ becomes a surface charge density, $C/m^2$
+
+But the wire is very thin, so the surface can ultimately be treated as a line:
+
+$$
+I_Z = q_l v_Z
+$$
+We use the thin-wire case for the discussion.
+
+Differentiating this expression with respect to time:
+$$
+\frac{dI_Z}{dt} = q_l\frac{dv_Z}{dt}=q_l a_Z
+$$
+$$
+l\frac{dI_Z}{dt}=lq_la_Z=Qa_Z
+$$
+> [!hint]
+> To create radiation, there must be **a time-varying current** or **an acceleration (or deceleration) of charge**
+>
+> -> The wire must be curved, bent, discontinuous, terminated, or truncated
+
+### How are EM fields contained and guided within the transmission line & antenna?
+
+
+
+Radiation has to be considered from two sides: on one side, the acceleration of the electrons driven by the excitation field; on the other, the deceleration of the electrons caused by the pause at the end of the wire. These two contribute the dominant radiation.
+
+If the distance between the acceleration and the deceleration is short, forming a pulse, a very wideband signal is radiated;
+
+If the acceleration and deceleration settle into a periodic back-and-forth motion, a single-frequency radiation is emitted.
+
+> [!hint]
+> Understanding radiation with water waves
+>
+> To create ripples in a pond, you can throw in a stone.
+>
+> The source can produce a pulse or a sine wave, which sets up an electromagnetic oscillation, induces charges to accelerate and decelerate, and produces a time-varying current. That creates a guided wave in the wire, i.e. the electromagnetic wave guided by the transmission line, which finally travels to the antenna and is radiated away.
+>
+> A pulse is like throwing a single stone in; a sine wave is like throwing stones in periodically.
+
+> [!hint]
+> According to [Maxwell's equations](Physics/Electromagnetism/Maxwells_equation.md):
+>
+> While an electromagnetic wave exists inside a wire, it needs a time-varying current, i.e. accelerating/decelerating charges, to support it. In a transmission line, there is no field without a source;
+>
+> But when solving [Maxwell's equations](Physics/Electromagnetism/Maxwells_equation.md) there is also a set of homogeneous solutions: fields that exist without any source. Those fields are the free-space wave;
+>
+> So the antenna is essentially an interface that converts the source-supported field inside the wire into a field that needs no source, i.e. the free-space wave.
+
+### How are EM fields finally detached from the antenna to form a free-space wave?
+
+# Radar key Parameters
+
+
+
+# Reference
+
+* [知乎 - 天线与电波传播基础知识](https://zhuanlan.zhihu.com/p/497482699)
+* [天线 in wiki](https://zh.wikipedia.org/wiki/%E5%A4%A9%E7%BA%BF)
+* [⭐⭐⭐陈士元 - 天线原理与基本参数](https://www.youtube.com/watch?v=JsVGW3z81wc&list=PLQdXflQNtKfLaGnvPLW_XVal-RaHxFN5j&index=1)
+* [天线8个核心参数解析 - 知乎 (zhihu.com)](https://zhuanlan.zhihu.com/p/375911768)
\ No newline at end of file
diff --git a/content/synthetic_aperture_radar_imaging/Chirp.md b/content/synthetic_aperture_radar_imaging/Chirp.md
new file mode 100644
index 000000000..b1f410d83
--- /dev/null
+++ b/content/synthetic_aperture_radar_imaging/Chirp.md
@@ -0,0 +1,119 @@
+---
+title: Chirp - 啁啾
+tags:
+- basic
+- signal
+---
+
+A chirp is a signal whose frequency changes (increases or decreases) with time. The name comes from the fact that such a signal sounds like the chirping of a bird.
+
+
+
+Chirps are widely used in sonar, radar, and laser systems. To measure long ranges while keeping fine time resolution, a radar needs short pulses yet must also keep transmitting; a chirp preserves the properties of both a continuous signal and a pulse, which is why it is used in radar and sonar detection.
+
+# Definition
+
+## Instantaneous frequency
+
+
+Given a signal $x(t)=A\sin{(\phi(t))}$, its instantaneous angular frequency is
+$$
+\omega(t)=\frac{d\phi(t)}{dt}
+$$
+After suitable normalisation, the instantaneous frequency is
+$$
+f(t)=\frac{1}{2\pi}\frac{d\phi(t)}{dt}
+$$
+
+## Chirpyness
+
+Differentiating the previous expressions once more, the rate of change of the instantaneous angular frequency is the **instantaneous angular chirpyness**
+
+$$
+\gamma(t)=\frac{d^2\phi(t)}{dt^2}
+$$
+Similarly, the **instantaneous (ordinary) chirpyness** is
+
+$$
+c(t)=\frac{1}{2\pi}\gamma(t)=\frac{1}{2\pi}\frac{d^2\phi(t)}{dt^2}
+$$
+# Types
+
+## Linear
+
+
+
+The instantaneous frequency $f(t)$ of the chirp varies linearly:
+
+$$f(t)=f_0 + ct$$
+$$
+c = \frac{f_1-f_0}{T}
+$$
+
+where c is a constant.
+
+Also,
+
+$$
+\phi(t)=\phi_0 + 2\pi \int_{0}^t f(\tau)d\tau =\phi_0 + 2\pi(\frac{c}{2}t^2 + f_0 t)
+$$
+
+The phase is a quadratic function of t, from which the time-domain signal follows:
+
+$$
+x(t)=A \cos{(\phi_0 + 2\pi (\frac{c}{2}t^2 + f_0 t))}
+$$
+
+Such a linear chirp is also called a **quadratic-phase signal**.
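+
+A minimal sketch generating this quadratic-phase signal directly from the formula and comparing it with `scipy.signal.chirp`; the start/stop frequencies and sweep duration are arbitrary example values:
+
+```python
+import numpy as np
+from scipy.signal import chirp
+
+f0, f1, T = 10.0, 100.0, 1.0             # start/stop frequency (Hz) and sweep duration (s)
+fs = 2000.0
+t = np.arange(0, T, 1 / fs)
+
+c = (f1 - f0) / T                        # chirpyness (Hz/s)
+x_manual = np.cos(2 * np.pi * (0.5 * c * t ** 2 + f0 * t))   # phi_0 = 0 in the formula above
+x_scipy = chirp(t, f0=f0, t1=T, f1=f1, method="linear")      # the same quadratic-phase signal
+
+print(np.allclose(x_manual, x_scipy))    # True
+```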
+
+## Exponential
+
+
+
+An exponential chirp, also called a geometric chirp, has an instantaneous frequency that varies exponentially, i.e. the ratio $f(t_2)/f(t_1)$ is constant for a fixed time spacing.
+
+signal frequency:
+
+$$
+f(t)=f_0 k^t
+$$
+
+$$
+k = (\frac{f(T)}{f_0})^{\frac{1}{T}} = \text{constant}
+$$
+
+Phase:
+
+$$
+\phi(t)=\phi_0 + 2\pi\int_0^t f(\tau)d\tau = \phi_0 + 2\pi f_0 (\frac{k^t - 1}{\ln(k)})
+$$
+
+time-domain:
+
+$$
+x(t) = \sin{[\phi_0 + 2\pi f_0(\frac{k^t - 1}{\ln(k)})]}
+$$
+
+## Hyperbolic
+
+Hyperbolic chirps are used in radar applications because they show the maximum [matched filter](https://en.wikipedia.org/wiki/Matched_filter) response after being distorted by the [Doppler Effect](Physics/Wave/Doppler_Effect.md).
+
+signal frequency:
+
+$$
+f(t) = \frac{f_0 f_1 T}{(f_0 - f_1)t + f_1T}
+$$
+
+Phase:
+
+$$
+\phi(t) = \phi_0 + 2\pi \int_0^t f(\tau)d\tau = \phi_0 + 2\pi \frac{-f_0f_1 T}{f_1 - f_0}\ln(1 - \frac{f_1-f_0}{f_1 T}t)
+$$
+
+
+time-domain:
+
+$$
+x(t) = \sin{[\phi_0 + 2\pi \frac{-f_0f_1 T}{f_1 - f_0}\ln(1 - \frac{f_1-f_0}{f_1 T}t)]}
+$$
+
diff --git a/content/synthetic_aperture_radar_imaging/Radiometric_Calibration.md b/content/synthetic_aperture_radar_imaging/Radiometric_Calibration.md
new file mode 100644
index 000000000..6af00caef
--- /dev/null
+++ b/content/synthetic_aperture_radar_imaging/Radiometric_Calibration.md
@@ -0,0 +1,29 @@
+---
+title: Radiometric Calibration - 辐射校准
+tags:
+- SAR
+- basic
+---
+
+# Overview
+SAR calibration aims to provide imagery whose pixel values can be related directly to the radar backscatter of the scene. Uncalibrated SAR imagery is sufficient for qualitative use, but calibrated SAR imagery is essential for the quantitative use of SAR data.
+
+Typical SAR processing that produces Level-1 imagery does not include radiometric corrections, and significant radiometric bias remains. It is therefore necessary to apply radiometric correction to SAR images *so that the pixel values truly reflect the radar backscatter of the reflecting surface*. Radiometric correction is also needed when comparing SAR images acquired by different sensors, or by the same sensor at different times, in different modes, or processed by different processors.
+
+## Types
+* **Sigma nought** - calibrates the backscatter returned to the antenna per unit area on the ground and is referenced to ground range. The image is calibrated so that it can be compared directly with other radar images collected by the same or different sensors. Scientists tend to use sigma nought to interpret surface scattering, surface reflection, and surface properties.
+ * *Scattering coefficient*, or the conventional measure of the strength of radar signals reflected by a distributed scatterer, usually expressed in dB. It is a *normalised dimensionless number*, comparing the strength observed to that expected from an area of one square meter. Sigma nought is defined with respect to the nominally horizontal plane, and in general has a significant variation with **incidence angle**, **wavelength**, and **polarisation**, as well as with **properties of the scattering surface itself**.
+* **Beta nought** - produces a dataset containing the radar brightness coefficient (the ratio between the power transmitted and the power received by the antenna). It is referenced to slant range and is dimensionless.
+* **Gamma** - commonly used when calibrating antennas. Because every range cell is treated as equidistant from the satellite, near-range and far-range brightness are equal, which helps determine the antenna pattern in the output dataset.
+* **None** - no correction applied.
+
+
+
+# Reference
+
+* [Sentinel-1 Radiometric Calibration—ArcMap | Documentation (arcgis.com)](https://desktop.arcgis.com/en/arcmap/latest/manage-data/raster-and-images/sentinel-1-radiometric-calibration.htm)
+
+* [Urban objects detection from C-band synthetic aperture radar (SAR) satellite images through simulating filter properties | Scientific Reports (nature.com)](https://www.nature.com/articles/s41598-021-85121-9)
+
+* [✨✨✨User Guides - Sentinel-1 SAR - Definitions - Sentinel Online - Sentinel Online (esa.int)](https://sentinel.esa.int/web/sentinel/user-guides/sentinel-1-sar/definitions)
+
diff --git a/content/synthetic_aperture_radar_imaging/SAR_Explained.md b/content/synthetic_aperture_radar_imaging/SAR_Explained.md
new file mode 100644
index 000000000..fde82039e
--- /dev/null
+++ b/content/synthetic_aperture_radar_imaging/SAR_Explained.md
@@ -0,0 +1,127 @@
+---
+title: Synthetic Aperture Radar (SAR) Explained
+tags:
+- SAR
+- basic
+---
+
+# Radar Basic Concepts
+
+## Down Looking vs. Side Looking
+
+
+
+A down-looking radar cannot distinguish points a and b at the same distance, so it is generally only used for monitoring of air and naval traffic.
+
+## Simplified explanation of Radar working & What is SAR
+The radar consists fundamentally of *a transmitter*, *a receiver*, *an antenna* and *an electronic system* to process and record the data.
+
+The transmitter generates successive short bursts or pulses of microwaves at regular intervals, which the antenna focuses into a beam. The radar beam illuminates the surface **obliquely**, at a right angle to the motion of the platform. The antenna receives the portion of the transmitted energy reflected, or backscattered, from various objects within the illuminated beam. By measuring the time delay between the transmission of a pulse and the reception of the backscattered echo from different targets, their distance from the radar, and therefore their location, can be determined. As the sensor platform *moves forward*, recording and processing the backscattered signals builds up a 2-dimensional image of the surface.
+
+
+> [!important]
+> Important
+> The along-track **resolution** is determined by the beam width, which is *inversely proportional to the antenna length*, also known as the **aperture**; a longer antenna, i.e. a longer aperture, produces a narrower beam and a finer resolution.
+> Long antenna $\leftrightarrow$ Small beam $\leftrightarrow$ Long aperture $\leftrightarrow$ Better image resolution
+
+
+
+### Why SAR
+In practice the physical size of a radar antenna is limited. By moving the radar, one can emulate a radar with a long antenna, i.e. obtain a larger aperture; this technique is called SAR. The goal is to obtain *high resolution images* using *comparatively small physical antennas*.
+
+---
+
+
+
+* Radar can measure *amplitude* and *phase*
+* Radar can only measure part of echoes.
+* The strength of the reflected echo is the backscattering coefficient ([sigma nought](synthetic_aperture_radar_imaging/Radiometric_Calibration.md)) and is expressed in [decibels(dB)](signal_processing/basic_knowledge/concept/what_is_dB.md)
+
+## Radar Resolution
+
+### Detail geometry
+
+
+**Fig** *Geometry of a side-looking real aperture radar. (SLAR)*
+
+Side-looking radars are divided into two types: real aperture radar (*SLAR or SLR*, SL for side-looking) and synthetic aperture radar (SAR).
+
+As shown in the figure above, the pulse emitted by the radar is [focused by the antenna](synthetic_aperture_radar_imaging/Antenna.md) into a narrow area; after scattering, the echoes are received by the receiver at different times.
+
+### Resolution
+
+When we talk about SAR resolution, we should know that SAR has four operating modes.
+
+
+
+* Stripmap SAR
+* Spotlight SAR
+* Circular SAR
+* Scan SAR
+
+Of these, Stripmap SAR, Spotlight SAR, and Circular SAR are the most commonly used.
+
+
+
+In Stripmap SAR the antenna is fixed on the platform, which moves in a straight line while continuously transmitting and receiving pulses; its advantage is that it can cover a large area.
+
+
+
+In Spotlight SAR the antenna is continuously steered so that it keeps illuminating the same area; its strength is the high-resolution image, because it collects data of the same area from different angles.
+
+
+
+Circular SAR observes the same area along a circular trajectory. It is very similar to Spotlight SAR; the difference is that in spotlight mode the antenna itself is fixed and only the platform moves, whereas in circular mode the antenna also moves to collect $360^\circ$ of information. The resolution of circular SAR is computed assuming isotropic $360^\circ$ reflection, so it is a theoretical resolution.
+
+I will use Spotlight SAR in the UWB radar technique for burn detection.
+
+
+#### Range Resolution & Azimuth Resolution
+
+
+
+This table is a quick way to check the concepts.
+
+Table. *Range and azimuth resolution*
+| | Range Resolution | Azimuth Resolution |
+| ------------- | -------------------------------------------- | ------------------------------------------------------ |
+| Stripmap SAR | $\Delta_r = \frac{c\pi}{2\omega_0}$ | $\Delta_a = \frac{D_y}{2}$ |
+| Spotlight SAR | $\Delta_r = \frac{c\pi}{2\omega_0}$ | $\Delta_a=\frac{r_n\lambda_c}{4L \cos \theta_n(0)}$ |
+| Circular SAR  | $\Delta_r = \frac{\pi}{\rho_{max} - \rho_{min}}$ | $\Delta_a=\frac{\pi}{2k_c \cos{\theta_z}\sin{\phi_0}}$  |
+
+* $\omega_0$ radar signal half-bandwidth in radians
+* $D_y$ the diameter of the radar in azimuth domain
+* $r_n$ the target's radial distance from the center of the aperture
+* $\lambda_c = \frac{2c\pi}{\omega_c}$ the wavelength at carrier fast-time frequency
+* $\omega_c$ the central frequency
+* $L$ half-size of the aperture
+* $\theta_n(0)$ the aspect angle of the $n$th target when radar is at (0, 0)
+* $\rho_{max}$ and $\rho_{min}$ the maximum and minimum polar radius in spatial frequency domain for the support of a target at the center of the spotlight area
+* $k_c$ the wavenumber at carrier frequency
+* $\theta_z$ the average depression angle of the target area
+* $\phi_0$ the polar angle in spatial frequency domain
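+
+As a sanity check on the stripmap/spotlight range-resolution entry: with $\omega_0 = 2\pi \cdot B/2$ (half the transmitted bandwidth, in rad/s) it reduces to the familiar $c/(2B)$. A tiny sketch with an assumed 100 MHz chirp bandwidth:
+
+```python
+import math
+
+c = 3e8                         # speed of light (m/s)
+B = 100e6                       # transmitted chirp bandwidth (Hz), illustrative value
+omega0 = 2 * math.pi * B / 2    # half-bandwidth in rad/s
+
+print(c * math.pi / (2 * omega0), c / (2 * B))   # both 1.5 m
+```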
+
+## Radar Image Format
+
+
+
+## Radar Key Parameters
+* Wave Length
+* Polarization
+* Incidence Angle
+
+### Wave Length
+
+
+
+The spatial resolution of radar data is directly related to the ratio of the sensor wavelength to the sensor antenna length. For a given wavelength, the longer the antenna, the higher the spatial resolution. For a spaceborne satellite operating at a wavelength of about 5 cm (C-band radar), to achieve a spatial resolution of 10 m you would need a radar antenna about 4,250 m long (more than 47 football fields!).
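+
+The ~4,250 m figure is consistent with the real-aperture relation $\Delta_a \approx R\lambda/D$ if one assumes a slant range of roughly 850 km (typical for a C-band satellite in low Earth orbit); a quick sketch of that back-of-the-envelope calculation:
+
+```python
+wavelength = 0.05          # C-band wavelength (m)
+slant_range = 850e3        # assumed slant range to the target (m), not stated in this note
+resolution = 10.0          # desired real-aperture azimuth resolution (m)
+
+# real-aperture azimuth resolution: delta_a ~ R * lambda / D  ->  D = R * lambda / delta_a
+antenna_length = slant_range * wavelength / resolution
+print(antenna_length)      # 4250.0 m, i.e. the ~4,250 m figure quoted above
+```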
+
+
+
+# Reference
+
+* [Theory of Synthetic Aperture Radar (uzh.ch)](https://www.geo.uzh.ch/~fpaul/sar_theory.html)
+* ***Sentinel-1** is a famous SAR, you can find almost every definitions* of SAR in this page:
+[User Guides - Sentinel-1 SAR - Definitions - Sentinel Online - Sentinel Online (esa.int)](https://sentinel.esa.int/web/sentinel/user-guides/sentinel-1-sar/definitions)
+* [SAR(Synthetic Aperture Radar)基础(一) - 知乎 (评论区说这个有错)](https://zhuanlan.zhihu.com/p/98053986)
+* [A Review of Synthetic-Aperture Radar Image Formation Algorithms and Implementations: A Computational Perspective]([Remote Sensing | Free Full-Text | A Review of Synthetic-Aperture Radar Image Formation Algorithms and Implementations: A Computational Perspective (mdpi.com)](https://www.mdpi.com/2072-4292/14/5/1258))
diff --git a/content/synthetic_aperture_radar_imaging/SAR_Imaging_Algorithm.md b/content/synthetic_aperture_radar_imaging/SAR_Imaging_Algorithm.md
new file mode 100644
index 000000000..aaf91d12d
--- /dev/null
+++ b/content/synthetic_aperture_radar_imaging/SAR_Imaging_Algorithm.md
@@ -0,0 +1,134 @@
+---
+title: SAR Imaging Algorithm review in 2022
+tags:
+- SAR
+- basic
+- algorithm
+- state-of-art
+---
+
+
+# Overview
+
+* Backprojection
+* Matched-filter
+* Polar format
+* Range-Doppler
+* Chirp scaling algorithms
+
+
+# What is SAR processing?
+
+
+## Born approximation
+
+SAR processing algorithms model the scene as a set of discrete point targets whose scattered EM fields do not influence each other.
+
+* No multiple bounces
+* The electric field at a target comes only from the incident wave, not from the surrounding scatterers
+* The target model is linear: the scattering response of point targets P1 and P2 together is modelled as the response of P1 alone plus the response of P2 alone
+* The **principle of superposition** can therefore be applied
+
+
+
+## Signal modelling
+
+
+SAR imaging images the scattering properties of a region. The terrain of this region is generally complex, and objects at different positions have different scattering properties; what the SAR finally receives is the superposition of the backscattered signals of all objects in the observed area, so the echo model of the whole area is very complicated. Constructing a scattering model of the whole observed area directly is difficult, and also unnecessary. To simplify the signal model, two discretisations are used:
+
+* Discretisation of the observed area;
+* Discretisation of the platform's flight.
+
+### Discretising the observed area
+
+**The observed area is treated as a collection of scattering points**, so that building an echo model for the area reduces to building echo models for these scattering points. It is then enough to model the echo of an arbitrary scattering point in order to represent the echo of the whole observed area. The criterion for this discretisation is that the scattering properties of objects within one discretisation interval are essentially unchanged.
+
+### Discretising the platform's flight
+
+**The platform's flight is treated as a "stop-and-go" model**: within one pulse interval (pulse repetition interval) the platform is considered "stopped" (stationary); it transmits one pulse and receives, at that position, the echo of that pulse from the illuminated targets. In the next pulse interval the platform "goes" (jumps) to the next position (the position it would have reached under the original uniform motion) and repeats the same operations there. The criterion for this discretisation is that the propagation speed of the electromagnetic wave is far greater than the platform speed, i.e. within one transmit-receive cycle the radar position is essentially unchanged.
+
+---
+
+
+
+
+
+As shown in the figure, for the red-dot target the SAR starts illuminating it at point A, is closest to it at point P, and stops illuminating it when it leaves at point B.
+
+Assume that at (slow) time $t$ the platform has flown to the marked position and the radar transmits the pulse $s(\tau)$; the echo received at that moment is:
+
+
+$$
+r(\tau,t) = \sigma(R_0, A_0) s(\tau - \frac{2R(t)}{c})\omega_a(\frac{t - t_p}{T_{syn}})
+$$
+
+
+* $\sigma(R_0, A_0)$ is the scattering cross-section of the target at $(R_0, A_0)$
+* $T_{syn}$ is the synthetic-aperture duration
+* $\omega_a(\cdot)$ can ideally be taken as a rectangular window; in practice it is determined by the real-aperture antenna pattern. Considering the two-way path, $\omega_a(\cdot)$ is the square of the antenna pattern.
+
+We also have:
+
+$$
+R(t) = \sqrt{R_0^2 + v^2(t-t_p)^2}
+$$
+
+From the figure it is easy to see that, compared with the red-dot target, a black-dot target at the same range has an identical Doppler history, only with a different azimuth delay; in the expression this means a different time of closest approach $t_p$. Simplifying the received echo further gives:
+
+$$
+r(\tau, t) = \{s(\tau)w_a(\frac{t}{T_{syn}})\} \bigotimes h(\tau, t)
+$$
+
+$$
+h(\tau, t) = \sigma(R_0, A_0)\delta(\tau-\frac{2R(t)}{c}, t-t_p)
+$$
+
+If the SAR (the process from transmission to reception) is viewed as a system, $h(\tau, t)$ is its system function; it contains the scattering cross-section at the target position, $\sigma(R_0, A_0)$, and the reconstruction function $\delta(\tau-\frac{2R(t)}{c}, t-t_p)$.
+
+The SAR imaging problem is therefore equivalent to deconvolving the system function $h(\tau, t)$ from the echo, given the transmitted signal.
+
+Moreover, in the reconstruction function $\delta(\tau-\frac{2R(t)}{c}, t-t_p)$ of the system function $h(\tau, t)$, the fast-time dimension contains a coupling term with the slow-time dimension. A key step of any SAR imaging algorithm is to remove this coupling, which is called range cell migration correction: the reconstruction function is corrected to $\delta(\tau-\frac{2R}{c}, t-t_p)$, after which pulse compression can be applied separately along the fast-time dimension $\tau$ and the slow-time dimension $t$ to obtain the SAR image.
+
+The above is the construction of the SAR echo model and a simple analysis of it. Two concepts commonly used in radar appear here: *slow time labels the time between pulses*, i.e. slow time indicates which pulse is being transmitted, so slow time is inherently discrete with the pulse repetition interval as its step; *fast time labels the time within a pulse*, i.e. fast time indicates the instant inside any single pulse. Compared with slow time, fast time is continuous and is discretised by sampling the signal.
+
+
+### Four-domain representation of the signal model
+
+
+
+# Range-Doppler Algorithm (RDA)
+
+The Range-Doppler Algorithm was the first SAR imaging algorithm, developed in the 1970s to generate stripmap SAR images. It uses block processing and frequency-domain operations in both range and azimuth.
+
+The steps are as follows:
+
+
+
+## Range Compression
+
+
+
+The range reference function is a sequence of complex numbers representing the original [chirp](synthetic_aperture_radar_imaging/Chirp.md) transmitted by the antenna.
+
+The original **linear-frequency chirp** transmitted by the antenna is a frequency-modulated signal whose frequency varies linearly with time, giving a sawtooth-like frequency sweep. It can be written as:
+
+$$ s(t) = \cos\left(2\pi\left(f_c t + \frac{B}{2T} t^2\right)\right) $$
+
+where $f_c$ is the centre frequency of the signal, $B$ its bandwidth, and $T$ its duration. Such a signal can be generated by a local oscillator (LO), amplified by a power amplifier, and transmitted from the antenna.
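+
+A minimal sketch of the range-compression idea: correlate each received echo with the reference chirp via FFT (matched filtering). Everything here (sample rate, bandwidth, the synthetic single-scatterer echo) is made up for illustration and is not the actual RDA implementation:
+
+```python
+import numpy as np
+
+fs, T, B = 200e6, 10e-6, 50e6                 # sample rate, pulse length, chirp bandwidth (illustrative)
+t = np.arange(0, T, 1 / fs)
+k = B / T                                     # chirp rate
+ref_chirp = np.exp(1j * np.pi * k * t ** 2)   # baseband reference chirp (range reference function)
+
+# fake raw echo: the same chirp delayed by 2 us inside a longer fast-time window
+n_fast = 4096
+echo = np.zeros(n_fast, dtype=complex)
+delay = int(2e-6 * fs)
+echo[delay:delay + t.size] = ref_chirp
+
+# matched filtering in the frequency domain: multiply by conj(FFT(reference))
+H = np.conj(np.fft.fft(ref_chirp, n_fast))
+compressed = np.fft.ifft(np.fft.fft(echo) * H)
+print(np.argmax(np.abs(compressed)) / fs)     # ~2e-6 s, the scatterer's fast-time delay
+```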
+
+## Azimuth Compression
+
+
+
+
+
+# Reference
+
+* [A Review of Synthetic-Aperture Radar Image Formation Algorithms and Implementations: A Computational Perspective]([Remote Sensing | Free Full-Text | A Review of Synthetic-Aperture Radar Image Formation Algorithms and Implementations: A Computational Perspective (mdpi.com)](https://www.mdpi.com/2072-4292/14/5/1258))
+* [Range Doppler Algorithm - University of Kansas](https://people.eecs.ku.edu/~callen58/826/826_SAR_Processing_Algorithms_Overview-F15.pptx)
+* [距离多普勒算法(RDA)-SAR成像算法系列(三)-【杨(_> <_)】的博客-CSDN博客 🚧这个人的博客讲的真不错🚧](https://blog.csdn.net/yjh_2019/article/details/123772486?spm=1001.2014.3001.5502)
+
diff --git a/content/synthetic_aperture_radar_imaging/SAR_MOC.md b/content/synthetic_aperture_radar_imaging/SAR_MOC.md
new file mode 100644
index 000000000..5abdaf19b
--- /dev/null
+++ b/content/synthetic_aperture_radar_imaging/SAR_MOC.md
@@ -0,0 +1,16 @@
+---
+title: "Synthetic Aperture Radar (SAR) Imaging - MOC"
+tags:
+- SAR
+- MOC
+---
+
+
+# Antenna
+
+* [Antenna](synthetic_aperture_radar_imaging/Antenna.md)
+
+# SAR
+
+* [[synthetic_aperture_radar_imaging/SAR_Explained|SAR Explained]]
+* [SAR Imaging Algorithm review in 2022](synthetic_aperture_radar_imaging/SAR_Imaging_Algorithm.md)
\ No newline at end of file
diff --git a/content/synthetic_aperture_radar_imaging/attachments/Linear-chirp.svg b/content/synthetic_aperture_radar_imaging/attachments/Linear-chirp.svg
new file mode 100644
index 000000000..b47205865
--- /dev/null
+++ b/content/synthetic_aperture_radar_imaging/attachments/Linear-chirp.svg
@@ -0,0 +1,98 @@
+
+
diff --git a/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230320150424.png b/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230320150424.png
new file mode 100644
index 000000000..a94009174
Binary files /dev/null and b/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230320150424.png differ
diff --git a/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230320163208.png b/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230320163208.png
new file mode 100644
index 000000000..cd9761269
Binary files /dev/null and b/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230320163208.png differ
diff --git a/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230320163240.png b/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230320163240.png
new file mode 100644
index 000000000..db4499d66
Binary files /dev/null and b/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230320163240.png differ
diff --git a/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230330153007.png b/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230330153007.png
new file mode 100644
index 000000000..1e4e0eac5
Binary files /dev/null and b/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230330153007.png differ
diff --git a/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230330153450.png b/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230330153450.png
new file mode 100644
index 000000000..d24491b16
Binary files /dev/null and b/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230330153450.png differ
diff --git a/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230330160535.png b/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230330160535.png
new file mode 100644
index 000000000..bf9879341
Binary files /dev/null and b/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230330160535.png differ
diff --git a/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230330163300.png b/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230330163300.png
new file mode 100644
index 000000000..054eac17d
Binary files /dev/null and b/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230330163300.png differ
diff --git a/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230404162148.png b/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230404162148.png
new file mode 100644
index 000000000..d0ccfc134
Binary files /dev/null and b/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230404162148.png differ
diff --git a/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230404162347.png b/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230404162347.png
new file mode 100644
index 000000000..125d891b7
Binary files /dev/null and b/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230404162347.png differ
diff --git a/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230404163712.png b/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230404163712.png
new file mode 100644
index 000000000..e2bcc902a
Binary files /dev/null and b/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230404163712.png differ
diff --git a/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230404165239.png b/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230404165239.png
new file mode 100644
index 000000000..eec933c3c
Binary files /dev/null and b/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230404165239.png differ
diff --git a/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230410105253.png b/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230410105253.png
new file mode 100644
index 000000000..17288469e
Binary files /dev/null and b/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230410105253.png differ
diff --git a/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230410105310.png b/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230410105310.png
new file mode 100644
index 000000000..18df546eb
Binary files /dev/null and b/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230410105310.png differ
diff --git a/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230410105548.png b/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230410105548.png
new file mode 100644
index 000000000..1c2f156e6
Binary files /dev/null and b/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230410105548.png differ
diff --git a/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230410111719.png b/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230410111719.png
new file mode 100644
index 000000000..a17aaddd2
Binary files /dev/null and b/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230410111719.png differ
diff --git a/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230410112252.png b/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230410112252.png
new file mode 100644
index 000000000..79c4d7e05
Binary files /dev/null and b/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230410112252.png differ
diff --git a/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230410113039.png b/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230410113039.png
new file mode 100644
index 000000000..46c738ffc
Binary files /dev/null and b/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230410113039.png differ
diff --git a/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230411105457.png b/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230411105457.png
new file mode 100644
index 000000000..15bba1e2b
Binary files /dev/null and b/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230411105457.png differ
diff --git a/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230414105410.png b/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230414105410.png
new file mode 100644
index 000000000..fb733f60a
Binary files /dev/null and b/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230414105410.png differ
diff --git a/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230414105501.png b/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230414105501.png
new file mode 100644
index 000000000..88878ef81
Binary files /dev/null and b/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230414105501.png differ
diff --git a/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230414105703.png b/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230414105703.png
new file mode 100644
index 000000000..feef34aa5
Binary files /dev/null and b/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230414105703.png differ
diff --git a/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230414110025.png b/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230414110025.png
new file mode 100644
index 000000000..ca0ac0f19
Binary files /dev/null and b/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230414110025.png differ
diff --git a/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230414111317.png b/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230414111317.png
new file mode 100644
index 000000000..f4c133bd4
Binary files /dev/null and b/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230414111317.png differ
diff --git a/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230414111329.png b/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230414111329.png
new file mode 100644
index 000000000..ce17aeccd
Binary files /dev/null and b/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230414111329.png differ
diff --git a/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230417110036.png b/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230417110036.png
new file mode 100644
index 000000000..27185c131
Binary files /dev/null and b/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230417110036.png differ
diff --git a/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230418102226.png b/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230418102226.png
new file mode 100644
index 000000000..1e94cc57e
Binary files /dev/null and b/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230418102226.png differ
diff --git a/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230418103204.png b/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230418103204.png
new file mode 100644
index 000000000..bab9dbb56
Binary files /dev/null and b/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230418103204.png differ
diff --git a/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230418103211.png b/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230418103211.png
new file mode 100644
index 000000000..bab9dbb56
Binary files /dev/null and b/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230418103211.png differ
diff --git a/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230418104536.png b/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230418104536.png
new file mode 100644
index 000000000..743de3dce
Binary files /dev/null and b/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230418104536.png differ
diff --git a/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230418110700.png b/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230418110700.png
new file mode 100644
index 000000000..667912d64
Binary files /dev/null and b/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230418110700.png differ
diff --git a/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230418111708.png b/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230418111708.png
new file mode 100644
index 000000000..5fc990c08
Binary files /dev/null and b/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230418111708.png differ
diff --git a/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230418162215.png b/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230418162215.png
new file mode 100644
index 000000000..730fc1d3f
Binary files /dev/null and b/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230418162215.png differ
diff --git a/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230418162216.png b/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230418162216.png
new file mode 100644
index 000000000..730fc1d3f
Binary files /dev/null and b/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230418162216.png differ
diff --git a/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230418165114.png b/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230418165114.png
new file mode 100644
index 000000000..c89c1818a
Binary files /dev/null and b/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230418165114.png differ
diff --git a/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230419111635.png b/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230419111635.png
new file mode 100644
index 000000000..6c353d07a
Binary files /dev/null and b/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230419111635.png differ
diff --git a/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230509140819.png b/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230509140819.png
new file mode 100644
index 000000000..8ebbe2daf
Binary files /dev/null and b/content/synthetic_aperture_radar_imaging/attachments/Pasted image 20230509140819.png differ
diff --git a/content/tmp_script/Common_Issues_in_DSP_Homework_Script.md b/content/tmp_script/Common_Issues_in_DSP_Homework_Script.md
new file mode 100644
index 000000000..5a7a8ef27
--- /dev/null
+++ b/content/tmp_script/Common_Issues_in_DSP_Homework_Script.md
@@ -0,0 +1,48 @@
+---
+title: Common Issues in DSP Homework
+tags:
+ - homework
+---
+
+# Issue 1- Compute the unit-pulse response h[n]
+
+## h[-1]?
+
+Negative n refers to time before the unit pulse is applied at n = 0. **If the system is causal, the unit-pulse response h[n] must be zero for n less than zero**, because a causal system cannot respond before it has received any input.
+
+So,
+$$
+h[n < 0] = 0
+$$
+## Categorized discussion
+
+
+
+# Issue 2 - Using recur function to compute the approximation to y(t), in differential equation.
+
+
+## Example
+
+$$
+\frac{d^2y(t)}{dt^2} + \frac{dy(t)}{dt}+4.25y(t) = 0, y(0) = 2, \dot{y}(0)=1
+$$
+
+$$
+y[n+2] + y[n+1](T-2) +y[n](1-T+T^2 4.25) = 0
+$$
+
+```matlab
+function y = recur(a, b, n, x, x0, y0)
+ N = length(a);
+ M = length(b)-1;
+ y = [y0 zeros(1,length(n))];
+ x = [x0 x]; a1 = a(length(a):-1:1); % reverses the elements in a
+ b1 = b(length(b):-1:1);
+ for i=N+1:N+length(n)
+ y(i) = -a1*y(i-N:i-1)' + b1*x(i-N:i-N+M)';
+ end
+ y = y(N+1:N+length(n));
+end
+```
+
+
diff --git a/content/tmp_script/attachments/Pasted image 20230612112839.png b/content/tmp_script/attachments/Pasted image 20230612112839.png
new file mode 100644
index 000000000..74f0827cb
Binary files /dev/null and b/content/tmp_script/attachments/Pasted image 20230612112839.png differ
diff --git a/content/tmp_script/attachments/Pasted image 20230612113043.png b/content/tmp_script/attachments/Pasted image 20230612113043.png
new file mode 100644
index 000000000..196e17b01
Binary files /dev/null and b/content/tmp_script/attachments/Pasted image 20230612113043.png differ
diff --git a/content/tmp_script/attachments/Pasted image 20230612113452.png b/content/tmp_script/attachments/Pasted image 20230612113452.png
new file mode 100644
index 000000000..f8b784760
Binary files /dev/null and b/content/tmp_script/attachments/Pasted image 20230612113452.png differ
diff --git a/content/tmp_script/attachments/Pasted image 20230612113454.png b/content/tmp_script/attachments/Pasted image 20230612113454.png
new file mode 100644
index 000000000..f8b784760
Binary files /dev/null and b/content/tmp_script/attachments/Pasted image 20230612113454.png differ
diff --git a/content/tmp_script/attachments/Pasted image 20230612113606.png b/content/tmp_script/attachments/Pasted image 20230612113606.png
new file mode 100644
index 000000000..012b43e0d
Binary files /dev/null and b/content/tmp_script/attachments/Pasted image 20230612113606.png differ
diff --git a/content/tmp_script/attachments/Pasted image 20231112222725.png b/content/tmp_script/attachments/Pasted image 20231112222725.png
new file mode 100644
index 000000000..06568e704
Binary files /dev/null and b/content/tmp_script/attachments/Pasted image 20231112222725.png differ
diff --git a/content/tmp_script/attachments/Pasted image 20231112222852.png b/content/tmp_script/attachments/Pasted image 20231112222852.png
new file mode 100644
index 000000000..a07b647ee
Binary files /dev/null and b/content/tmp_script/attachments/Pasted image 20231112222852.png differ
diff --git a/content/tmp_script/attachments/三一面试.pdf b/content/tmp_script/attachments/三一面试.pdf
new file mode 100644
index 000000000..823b42109
Binary files /dev/null and b/content/tmp_script/attachments/三一面试.pdf differ
diff --git a/content/tmp_script/interview_31_ans.md b/content/tmp_script/interview_31_ans.md
new file mode 100644
index 000000000..40f4110ef
--- /dev/null
+++ b/content/tmp_script/interview_31_ans.md
@@ -0,0 +1,8 @@
+---
+title: 2023 三位一体 - Biomedical Engineering Interview Questions
+tags:
+- tmp
+---
+
+
+[三一面试.pdf](tmp_script/attachments/三一面试.pdf)
\ No newline at end of file
diff --git a/content/tmp_script/prefix_sum.md b/content/tmp_script/prefix_sum.md
new file mode 100644
index 000000000..6a27decc2
--- /dev/null
+++ b/content/tmp_script/prefix_sum.md
@@ -0,0 +1,15 @@
+---
+title: Prefix Sum
+tags:
+- basic
+---
+
+Suppose we have an array arr of length n. The **prefix sum** array prefixSum is defined as follows:
+
+```python
+# prefixSum[i] = arr[0] + arr[1] + ... + arr[i]
+prefixSum = [0] * len(arr)
+prefixSum[0] = arr[0]
+for i in range(1, len(arr)):
+    prefixSum[i] = prefixSum[i - 1] + arr[i]
+```
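+
+As a quick usage sketch (the range_sum helper and the sample array are illustrative additions, not part of the note): once prefixSum is built, the sum of any slice arr[l..r] can be read off in O(1).
+
+```python
+def range_sum(prefixSum, l, r):
+    # sum of arr[l..r] = prefixSum[r] - prefixSum[l-1]
+    return prefixSum[r] - (prefixSum[l - 1] if l > 0 else 0)
+
+arr = [3, 1, 4, 1, 5, 9]
+prefixSum = [3, 4, 8, 9, 14, 23]   # built with the loop above
+print(range_sum(prefixSum, 2, 4))  # arr[2] + arr[3] + arr[4] = 10
+```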
diff --git a/content/toolkit/git/git_MOC.md b/content/toolkit/git/git_MOC.md
new file mode 100644
index 000000000..4baa95429
--- /dev/null
+++ b/content/toolkit/git/git_MOC.md
@@ -0,0 +1,8 @@
+---
+title: Git MOC
+tags:
+- git
+- basic
+---
+
+* [GitHub Actions](toolkit/git/github_actions.md)
\ No newline at end of file
diff --git a/content/toolkit/git/github_actions.md b/content/toolkit/git/github_actions.md
new file mode 100644
index 000000000..5e53fd588
--- /dev/null
+++ b/content/toolkit/git/github_actions.md
@@ -0,0 +1,10 @@
+---
+title: GitHub Actions
+tags:
+- git
+- git-action
+---
+
+# Reference
+
+* [GitHub Actions by Example](https://www.actionsbyexample.com/)
\ No newline at end of file
diff --git a/content/warehouse/IELTS.md b/content/warehouse/IELTS.md
new file mode 100644
index 000000000..52ef08ce2
--- /dev/null
+++ b/content/warehouse/IELTS.md
@@ -0,0 +1,8 @@
+---
+title: IELTS material
+tags:
+ - material
+ - IELTS
+---
+
+[IELTS Reading Material.zip](https://pinktalk.online/warehouse/attachments/reading_3_4_ieltsonlinetests.com_reading_3_4.zip)
\ No newline at end of file
diff --git a/content/warehouse/NUS_Transcript.pdf b/content/warehouse/NUS_Transcript.pdf
new file mode 100644
index 000000000..62567f29b
Binary files /dev/null and b/content/warehouse/NUS_Transcript.pdf differ
diff --git a/content/warehouse/attachments/Pasted image 20230404150745.png b/content/warehouse/attachments/Pasted image 20230404150745.png
new file mode 100644
index 000000000..4cc1c3f4a
Binary files /dev/null and b/content/warehouse/attachments/Pasted image 20230404150745.png differ
diff --git a/content/warehouse/attachments/reading_3_4_ieltsonlinetests.com_reading_3_4.zip b/content/warehouse/attachments/reading_3_4_ieltsonlinetests.com_reading_3_4.zip
new file mode 100644
index 000000000..acdcbb7f6
Binary files /dev/null and b/content/warehouse/attachments/reading_3_4_ieltsonlinetests.com_reading_3_4.zip differ
diff --git a/content/warehouse/dampers_keeping_a_door_from_slamming shut.md b/content/warehouse/dampers_keeping_a_door_from_slamming shut.md
new file mode 100644
index 000000000..14de74a80
--- /dev/null
+++ b/content/warehouse/dampers_keeping_a_door_from_slamming shut.md
@@ -0,0 +1,5 @@
+---
+title: Dampers keeping a door from slamming shut
+---
+
+
\ No newline at end of file
diff --git a/content/warehouse/img/skills/anaconda.png b/content/warehouse/img/skills/anaconda.png
new file mode 100644
index 000000000..b61b4542a
Binary files /dev/null and b/content/warehouse/img/skills/anaconda.png differ
diff --git a/content/warehouse/img/skills/bash.png b/content/warehouse/img/skills/bash.png
new file mode 100644
index 000000000..93e5d72a4
Binary files /dev/null and b/content/warehouse/img/skills/bash.png differ
diff --git a/content/warehouse/img/skills/blogdown.png b/content/warehouse/img/skills/blogdown.png
new file mode 100644
index 000000000..53e07f55a
Binary files /dev/null and b/content/warehouse/img/skills/blogdown.png differ
diff --git a/content/warehouse/img/skills/c++.png b/content/warehouse/img/skills/c++.png
new file mode 100644
index 000000000..6662f0ea3
Binary files /dev/null and b/content/warehouse/img/skills/c++.png differ
diff --git a/content/warehouse/img/skills/c.webp b/content/warehouse/img/skills/c.webp
new file mode 100644
index 000000000..8af487400
Binary files /dev/null and b/content/warehouse/img/skills/c.webp differ
diff --git a/content/warehouse/img/skills/cmake.webp b/content/warehouse/img/skills/cmake.webp
new file mode 100644
index 000000000..421a02f0d
Binary files /dev/null and b/content/warehouse/img/skills/cmake.webp differ
diff --git a/content/warehouse/img/skills/css3.svg b/content/warehouse/img/skills/css3.svg
new file mode 100644
index 000000000..550e2e098
--- /dev/null
+++ b/content/warehouse/img/skills/css3.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/content/warehouse/img/skills/django.webp b/content/warehouse/img/skills/django.webp
new file mode 100644
index 000000000..65ad0635f
Binary files /dev/null and b/content/warehouse/img/skills/django.webp differ
diff --git a/content/warehouse/img/skills/eth.png b/content/warehouse/img/skills/eth.png
new file mode 100644
index 000000000..33065b397
Binary files /dev/null and b/content/warehouse/img/skills/eth.png differ
diff --git a/content/warehouse/img/skills/html5.svg b/content/warehouse/img/skills/html5.svg
new file mode 100644
index 000000000..dd81b76a9
--- /dev/null
+++ b/content/warehouse/img/skills/html5.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/content/warehouse/img/skills/javascript.svg b/content/warehouse/img/skills/javascript.svg
new file mode 100644
index 000000000..79aa7d741
--- /dev/null
+++ b/content/warehouse/img/skills/javascript.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/content/warehouse/img/skills/jekyll.webp b/content/warehouse/img/skills/jekyll.webp
new file mode 100644
index 000000000..6712f5cc7
Binary files /dev/null and b/content/warehouse/img/skills/jekyll.webp differ
diff --git a/content/warehouse/img/skills/latex.png b/content/warehouse/img/skills/latex.png
new file mode 100644
index 000000000..3b6a80341
Binary files /dev/null and b/content/warehouse/img/skills/latex.png differ
diff --git a/content/warehouse/img/skills/linux.png b/content/warehouse/img/skills/linux.png
new file mode 100644
index 000000000..518b5c79b
Binary files /dev/null and b/content/warehouse/img/skills/linux.png differ
diff --git a/content/warehouse/img/skills/matlab.png b/content/warehouse/img/skills/matlab.png
new file mode 100644
index 000000000..b5a543591
Binary files /dev/null and b/content/warehouse/img/skills/matlab.png differ
diff --git a/content/warehouse/img/skills/numpy.png b/content/warehouse/img/skills/numpy.png
new file mode 100644
index 000000000..96d66c2d5
Binary files /dev/null and b/content/warehouse/img/skills/numpy.png differ
diff --git a/content/warehouse/img/skills/opencv.png b/content/warehouse/img/skills/opencv.png
new file mode 100644
index 000000000..efdd96ba4
Binary files /dev/null and b/content/warehouse/img/skills/opencv.png differ
diff --git a/content/warehouse/img/skills/pandas.png b/content/warehouse/img/skills/pandas.png
new file mode 100644
index 000000000..462291f37
Binary files /dev/null and b/content/warehouse/img/skills/pandas.png differ
diff --git a/content/warehouse/img/skills/python.png b/content/warehouse/img/skills/python.png
new file mode 100644
index 000000000..1c85dfd37
Binary files /dev/null and b/content/warehouse/img/skills/python.png differ
diff --git a/content/warehouse/img/skills/pytorch.png b/content/warehouse/img/skills/pytorch.png
new file mode 100644
index 000000000..bad49bf30
Binary files /dev/null and b/content/warehouse/img/skills/pytorch.png differ
diff --git a/content/warehouse/img/skills/r.png b/content/warehouse/img/skills/r.png
new file mode 100644
index 000000000..be48e3074
Binary files /dev/null and b/content/warehouse/img/skills/r.png differ
diff --git a/content/warehouse/img/skills/rstudio.png b/content/warehouse/img/skills/rstudio.png
new file mode 100644
index 000000000..385c98bf1
Binary files /dev/null and b/content/warehouse/img/skills/rstudio.png differ
diff --git a/content/warehouse/img/skills/ubuntu.png b/content/warehouse/img/skills/ubuntu.png
new file mode 100644
index 000000000..83e6f6224
Binary files /dev/null and b/content/warehouse/img/skills/ubuntu.png differ
diff --git a/content/warehouse/img/skills/vim.png b/content/warehouse/img/skills/vim.png
new file mode 100644
index 000000000..37f8a4198
Binary files /dev/null and b/content/warehouse/img/skills/vim.png differ
diff --git a/content/warehouse/img/skills/vscode.png b/content/warehouse/img/skills/vscode.png
new file mode 100644
index 000000000..f9740919e
Binary files /dev/null and b/content/warehouse/img/skills/vscode.png differ
diff --git a/content/warehouse/img/skills/vuejs.svg b/content/warehouse/img/skills/vuejs.svg
new file mode 100644
index 000000000..27afad0c4
--- /dev/null
+++ b/content/warehouse/img/skills/vuejs.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/content/文学/article/article_MOC.md b/content/文学/article/article_MOC.md
new file mode 100644
index 000000000..f08b9ee8c
--- /dev/null
+++ b/content/文学/article/article_MOC.md
@@ -0,0 +1,7 @@
+---
+title: 🎏My Writing
+tags:
+ - article
+ - 文学
+ - MOC
+---
diff --git a/content/文学/attachments/Pasted image 20230321142115.png b/content/文学/attachments/Pasted image 20230321142115.png
new file mode 100644
index 000000000..9f35fdb41
Binary files /dev/null and b/content/文学/attachments/Pasted image 20230321142115.png differ
diff --git a/content/文学/attachments/Pasted image 20230321143300.png b/content/文学/attachments/Pasted image 20230321143300.png
new file mode 100644
index 000000000..99aadd560
Binary files /dev/null and b/content/文学/attachments/Pasted image 20230321143300.png differ
diff --git a/content/文学/poem/2018.md b/content/文学/poem/2018.md
new file mode 100644
index 000000000..b767f8951
--- /dev/null
+++ b/content/文学/poem/2018.md
@@ -0,0 +1,123 @@
+---
+title: Poem in 2018
+tags:
+- poem
+---
+
+
+
+> [!quote]
+> 短诗四
+>
+> 无人寻找惆怅
+> 但惆怅裹挟了我
+>
+> 孤独,
+> 是没有反义词的
+> 至多,也只算是近义词
+> 譬如热闹
+> 背后
+> 是无数个孤独
+>
+> 我咀嚼快乐
+> 却尝到痛苦
+
+
+
+
+---
+
+> [!quote]
+> 短诗三
+>
+> 你或许已经看穿了我的虚伪
+> 但你不知道
+> 我有两层面具
+> 一层给你看
+> 一层给别人看
+
+
+
+
+---
+
+> [!quote]
+> 剑客
+>
+> 剑客本来不喝酒
+> 只是他的心和剑
+> 一样锋利
+>
+> 江湖的剑客
+> 或许在决斗中,逢凶化吉
+> 可他始终赢不了
+> 推杯换盏的酒局
+>
+> "十年磨一剑"
+> 是贾岛的剑客
+>
+> 十年,
+> 对剑客太长
+> 他早就一个人
+> 在无人懂他的客栈里
+> 醉了
+>
+> 他的心,和他的剑
+> 在酒中,
+> 锈了
+
+
+ 
+
+
+---
+
+> [!quote]
+> 在深巷开一家店
+>
+> 在深巷开一家店
+> 外观奇特的它
+> 看上去那么不同
+> 用舶来的文字做店名
+> 充满异域的味道
+>
+> 用昂贵的书籍装饰角落
+> 请名叫津宁的画家替我
+> 雕绘走廊
+>
+> 雇流浪的诗人
+> 调配他最擅长的
+> 暮光红色的酒
+> 收留失群的精灵
+> 在深夜
+> 唱出回家的忧伤
+>
+> 曾经我有世界
+> 现在我有这家店
+>
+> 深巷的店
+> 一定是吉普赛人的风格
+> 希望云游的客人
+> 将它的名字
+> 传播四方
+> 传到她的远方
+>
+> 我心爱的姑娘
+> 只有她
+> 知道那些舶来文字背后
+> 悄悄地过往
+
+
+
+
+
diff --git a/content/文学/poem/2022.md b/content/文学/poem/2022.md
new file mode 100644
index 000000000..05ec60ab6
--- /dev/null
+++ b/content/文学/poem/2022.md
@@ -0,0 +1,72 @@
+---
+title: Poem in 2022
+tags:
+- 文学
+- poem
+---
+
+
+
+> [!quote]
+> 猫猫喜欢老鼠
+>
+> 猫猫喜欢老鼠
+> 老鼠也喜欢猫猫
+> 但老鼠对未来充满担忧
+> 就算猫猫的保证也没用
+> 后来老鼠不辞而别
+> 猫猫哭得很大声
+> 猫猫脱下面具
+> 它只是个胆小的老鼠
+>
+
+---
+
+> [!quote]
+> 小赵不会断舍离
+>
+> 小赵不会断舍离
+> 他不懂
+> 为什么对面可以走得这么干净
+> 他还傻傻顶着情头
+> 发霉
+>
+
+---
+
+> [!quote]
+> 悲伤
+>
+> 悲伤或许是个好东西
+> 它给予诗人灵感
+> 赋予深度
+> 但我不想要
+> 我只想要
+> 她的回头
+
+---
+
+> [!quote]
+> 她要下船
+>
+> 到了她的岛
+> 请不要流泪挽留
+> 笑着告别 .
+> .. ...
+>
+> 船开了
+> 风中有泪滴 ... ...
+>
+> 你没走,因为没有下座岛的指针
+
+
diff --git a/content/文学/poem/2023.md b/content/文学/poem/2023.md
new file mode 100644
index 000000000..093d2aaa0
--- /dev/null
+++ b/content/文学/poem/2023.md
@@ -0,0 +1,62 @@
+---
+title: Poem in 2023
+tags:
+- poem
+---
+
+
+
+> [!quote]
+> 清单
+>
+> 他有一个清单
+> 没那么清楚,也没那么模糊
+> 清单都打上了勾
+> 那就是表白的时候
+> 他想
+>
+> 动物园,
+> 海洋馆,
+> 电影院,
+> ... ...
+>
+> 清单的初稿一个一个被勾上
+> 只剩下表白的选项
+> 日子还长
+> 他想
+> 悄悄在心里把清单又延长
+
+
+
+---
+
+
+> [!quote]
+> 空气
+>
+> 空气有味道,也有形状
+> 我看过
+>
+> 尴尬的空气
+> 冷漠的空气
+> 暧昧的空气
+> ... ...
+>
+> 其实我会,你也会
+> 阅读空气
+>
+> 只是有时,
+> 装作不会😉
+
+
+
\ No newline at end of file
diff --git a/content/文学/poem/Poem_by_me.md b/content/文学/poem/Poem_by_me.md
new file mode 100644
index 000000000..ec2e870ea
--- /dev/null
+++ b/content/文学/poem/Poem_by_me.md
@@ -0,0 +1,13 @@
+---
+title: My Poem
+tags:
+- 文学
+- poem
+---
+
+* [2018](文学/poem/2018.md)
+* [2022](文学/poem/2022.md)
+* [2023](文学/poem/2023.md)
+
+
+
diff --git a/content/文学/poem/attachments/050be4498ef68507f851d3c8faa3751.jpg b/content/文学/poem/attachments/050be4498ef68507f851d3c8faa3751.jpg
new file mode 100644
index 000000000..787ffda70
Binary files /dev/null and b/content/文学/poem/attachments/050be4498ef68507f851d3c8faa3751.jpg differ
diff --git a/content/文学/poem/attachments/843fe68324e3795eab897988998a553.jpg b/content/文学/poem/attachments/843fe68324e3795eab897988998a553.jpg
new file mode 100644
index 000000000..eea2c12bd
Binary files /dev/null and b/content/文学/poem/attachments/843fe68324e3795eab897988998a553.jpg differ
diff --git a/content/文学/poem/attachments/961eb44f141fa0d8e7598e910110d1c.jpg b/content/文学/poem/attachments/961eb44f141fa0d8e7598e910110d1c.jpg
new file mode 100644
index 000000000..0019db4f0
Binary files /dev/null and b/content/文学/poem/attachments/961eb44f141fa0d8e7598e910110d1c.jpg differ
diff --git a/content/文学/poem/attachments/QQ图片20230612132828.jpg b/content/文学/poem/attachments/QQ图片20230612132828.jpg
new file mode 100644
index 000000000..d3b199171
Binary files /dev/null and b/content/文学/poem/attachments/QQ图片20230612132828.jpg differ
diff --git a/content/文学/poem/attachments/e965bfcd2a8870adb80087744920341.jpg b/content/文学/poem/attachments/e965bfcd2a8870adb80087744920341.jpg
new file mode 100644
index 000000000..70c9885e5
Binary files /dev/null and b/content/文学/poem/attachments/e965bfcd2a8870adb80087744920341.jpg differ
diff --git a/content/文学/句子/Comments.md b/content/文学/句子/Comments.md
new file mode 100644
index 000000000..0a3fa9c28
--- /dev/null
+++ b/content/文学/句子/Comments.md
@@ -0,0 +1,111 @@
+---
+title: 🥐Comments
+tags:
+- 文学
+- 摘抄
+- comments
+---
+
+
+
+> [!quote]
+> From comments
+>
+> 被一些影评人的高度评价给诈骗到了;看来不同人对浪漫的定义非常不同,对于有些人来说“宇宙”“存在”等词以及语焉不详的现代诗歌排列组合在一起即可触发内心浪漫情结,就跟大学生会用夏天、自由、苏打、快乐with黄油相机滤镜加字加字照片来营造自觉出众的氛围感朋友圈一样。女儿的线也太刻奇,套了一个寻找外星人的噱头、还有伪纪录片的形式,镜头的设计还有手持的感觉在大荧幕上显得非常粗糙,之前很喜欢导演那个《法制未来时》的短片,结果电影有种加长版视频的感觉,还是感觉撑不起来啊... ...
+
+
+---
+
+
+> [!quote]
+> From CC98, credits to someone
+>
+>
+>感觉你女朋友是那种 初中喜欢混混 高中喜欢体育生 军训喜欢教官 大学喜欢rapper 打工爱上领导 理发爱上托尼 看病爱上医生 离婚爱上律师的人
+
+
+---
+
+
+
+> [!quote]
+> From a video talking about 坂本龙一 by [HOPICO](https://www.bilibili.com/video/BV1pa4y1T7v2/?spm_id_from=333.1007.top_right_bar_window_history.content.click&vd_source=c47136abc78922800b17d6ce79d6e19f)
+>
+> 1. 脱下合成器的修饰之后,这首作品变成了一个最赤裸的样子。我们听过印象深刻的钢琴曲,我们热爱它们,我们在形容它们的时候,多数是“热情”、“悲伤”、“欢乐”、“雄伟”。但这首歌不一样,之于我第一次听到它的时候,我在想为什么会有一首歌,那么精确地,在开头把“安静”这两个字讲了出来,明明无声的真空才是最安静,可这几个音符勾勒出来的安静,就是胜过了无声的真空。我想,那是因为我们在这几个音符背后,仿佛能够看到一个作曲家,找到他的钢琴,好像全世界只剩下他们的样子。
+>
+> 要说幸运的是,我亲眼见过教授带着管弦乐团的全编制演奏这首作品。虽然我内心一直觉得,这是一首寂寞的作品,我心中它最好的样子,就是保持在三人编制以下,但是我清楚记得,那天在现场听到最后一个段落时,我眼泪止不住地往下流,我还记得我当时心里想的是:“md,这么多年了,终于听到了”
+>
+> 就像教授在后来采访里说的,它可能不是首好的电影配乐,因为它不需要画面就已经自成一体了。可是重新听到它的时候,仍然意气风发。
+>
+>
+>
+> 3. 在这之后,教授有陆续发行自己钢琴演奏版本的「A Flower is Not A Flower」。如果说文金龙版本的是线状绵延的凄凉,那在教授钢琴的版本里,我们听到的是点状的,在夜里,一边开,一边败的花的失落之美。
+>
+> 在教授这个阶段的很多作品里,我们都能找到类似的气质。或者说,教授本身就很擅长用琴键去勾勒这种气质。如果允许我用很自私的感受去总结的话,我想对于我而言就是,在这样的作品里,*哪怕是简单排列的单音,我们也能听到一边生长,一边流失*。
+>
+>
+>