mirror of https://github.com/jackyzha0/quartz.git, synced 2025-12-30 08:14:05 -06:00
Add TODOs and update content
This commit is contained in:
parent 4d4ea1ef91
commit 9b8f66da58

README.md (10 changed lines)
@@ -18,3 +18,13 @@ Quartz v4 features a from-the-ground rewrite focusing on end-user extensibility

<img src="https://cdn.jsdelivr.net/gh/jackyzha0/jackyzha0/sponsorkit/sponsors.svg" />
</a>
</p>

# TODOS

```bash
npm i
# npx quartz create
# npx quartz build --serve # local serve
npx quartz sync
```
@@ -1,6 +1,7 @@

## What are the 4 Vs of Big Data?

There are generally four characteristics that must be part of a dataset to qualify it as big data: volume, velocity, variety and veracity [link](https://bernardmarr.com/what-are-the-4-vs-of-big-data/#:~:text=There%20are%20generally%20four%20characteristics,%2C%20velocity%2C%20variety%20and%20veracity.)

#etl

### What is ETL
@@ -10,7 +11,7 @@ ETL provides the foundation for data analytics and machine learning workstreams.

- Cleanse the data to improve data quality and establish consistency
- Load data into a target database
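The steps above can be sketched in plain Python with the standard library. This is an illustrative toy, not part of the note being diffed: the raw rows, field names, and the in-memory SQLite target are all invented for the example.

```python
import sqlite3

# Extract: raw CSV-like rows pulled from some source system (invented data)
raw_rows = ["Alice, 30 ", "BOB,25", "carol,31"]

# Cleanse/transform: trim whitespace, normalize casing, coerce types
def cleanse(row):
    name, age = (field.strip() for field in row.split(","))
    return name.title(), int(age)

records = [cleanse(r) for r in raw_rows]

# Load: write the cleansed records into a target database
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (name TEXT, age INTEGER)")
conn.executemany("INSERT INTO people VALUES (?, ?)", records)

print(conn.execute("SELECT name FROM people ORDER BY age").fetchall())
# → [('Bob',), ('Alice',), ('Carol',)]
```

Real pipelines replace each stage with connectors to actual sources and targets, but the extract → cleanse → load shape stays the same.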
#apachebeam

### Apache Beam

Apache Beam is an open-source, unified programming model and set of tools for building batch and streaming data processing pipelines. It provides a way to express data processing pipelines that can run on various distributed processing backends, such as Apache Spark, Apache Flink, Google Cloud Dataflow, and others. Apache Beam offers a high-level API that abstracts away the complexities of distributed data processing and allows developers to write pipeline code in a language-agnostic manner.
content/Devops&DevSecOps/Index.md (new file, 0 lines)
content/SoftwareEnginnering/Index.md (new file, 2 lines)
@@ -0,0 +1,2 @@

# Software Engineering
@@ -1,11 +0,0 @@

# Software Enginnering
adsmsadsa
d
asd
as
d
as
da
sd
as
d