remove obsidian vault

This commit is contained in:
ckrivacic01 2024-06-06 15:05:28 -04:00
parent cf04e79db3
commit 0ebf4c3794
9 changed files with 0 additions and 434 deletions

View File

@ -1 +0,0 @@
If the event archive volume is deleted, the server can fail to start up.

View File

@ -1,5 +0,0 @@
### Default data insert
- Permissions are not correct: the admin user does not have all permissions, and they must be added manually.
### Multiserver permissions
Permissions are only applied per server.

View File

@ -1,143 +0,0 @@
To stream images you will need to use the protobuf specs to generate a JavaScript implementation. The websocket used for alarms can also be used to get live video from a camera.
## Starting a video subscription (mjpeg)
To start a video subscription, follow the steps below. This will get video in mjpeg format that can be displayed as a jpeg in the browser.
1. Make a `PUT` request to `http://{{server}}:{{port}}/api/v1/video/{websocket-session-id}`
```json
{
  "category": "video",
  "spec": {
    "videoStreamKey": {
      "cameraNumber": <replace-with-camera-number>,
      "scaled": true
    },
    "codec": "mjpeg"
  }
}
```
### Ending the subscription
To end the subscription, make a `DELETE` request to `http://{{server}}:{{port}}/api/v1/video/{websocket-session-id}` with the same body used to start the subscription:
```json
{
  "category": "video",
  "spec": {
    "videoStreamKey": {
      "cameraNumber": <replace-with-camera-number>,
      "scaled": true
    },
    "codec": "mjpeg"
  }
}
```
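The start/stop requests above can be sketched in TypeScript. The helper below builds the shared subscription body; the request itself is shown as a commented usage sketch since the server URL, port, and websocket session id are placeholders:

```typescript
// Body shared by the PUT (start) and DELETE (end) subscription requests.
interface VideoSubscription {
  category: string;
  spec: {
    videoStreamKey: { cameraNumber: number; scaled: boolean };
    codec: string;
  };
}

// Build the mjpeg subscription body for one camera (scaled stream).
function buildVideoSubscription(cameraNumber: number): VideoSubscription {
  return {
    category: "video",
    spec: {
      videoStreamKey: { cameraNumber, scaled: true },
      codec: "mjpeg",
    },
  };
}

// Usage sketch (server, port, and session id are placeholders):
// await fetch(`http://${server}:${port}/api/v1/video/${sessionId}`, {
//   method: "PUT", // or "DELETE" to end the subscription
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(buildVideoSubscription(3)),
// });
```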
## Reading the binary messages
Install `@protobuf-ts/plugin`, then generate the protobuf classes:
```bash
SRC_DIR="<pathToProtofiles>"
npx protoc \
  --ts_out lib/generated/proto \
  --ts_opt long_type_string \
  --proto_path $SRC_DIR \
  $SRC_DIR/*.proto
```
The generated code will be output to `lib/generated/proto`. In the websocket handling code, parse the binary messages using the generated classes:
```javascript
if (msg.data instanceof Blob) {
  // receiving binary
  this.incomingBinary(msg.data);
} else if (typeof Buffer !== 'undefined' && msg.data instanceof Buffer) {
  // receiving binary
  this.incomingBuffer(msg.data);
} else {
  // receiving text
  this.incomingText(msg.data);
}
```
`incomingBinary` and `incomingBuffer` are defined like this:
```javascript
private incomingBinary(blob: Blob) {
  blob.arrayBuffer().then(ab => {
    const uint = new Uint8Array(ab);
    const streamMessage = StreamMessageToClient.fromBinary(uint);
    this.incomingStreamMessage(streamMessage);
  });
}

private incomingBuffer(data: Buffer) {
  const uint = new Uint8Array(data); // Node.js: data is already a Buffer
  const streamMessage = StreamMessageToClient.fromBinary(uint);
  this.incomingStreamMessage(streamMessage);
}
```
`StreamMessageToClient` is generated from the protobuf specs. The following methods are an example of how to parse the contents of the message:
```javascript
private incomingStreamMessage(msg: StreamMessageToClient) {
  if (msg.data.oneofKind === "videoMessage") {
    this.incomingVideo(msg.data.videoMessage);
  } else {
    console.debug("Type " + msg.data.oneofKind + " not implemented, yet");
  }
}

private incomingVideo(msg: VideoMessage) {
  if (msg.videoStreamKey) {
    const videoStreamKey: VideoStreamKey = {
      cameraNumber: msg.videoStreamKey.cameraNumber,
      scaled: msg.videoStreamKey.scaled,
    };
    // msg.streamMetrics;
    if (msg.frame.oneofKind === "h264VideoMessage") {
      console.debug('received h264VideoMessage. length=%d', msg.frame.h264VideoMessage.nalUnit.length);
      const videoMessage: VideoMessageH264 = {
        videoStreamKey: videoStreamKey,
        iframe: msg.frame.h264VideoMessage.iframe,
        isParameterSet: msg.frame.h264VideoMessage.isParameterSet,
        nalUnit: msg.frame.h264VideoMessage.nalUnit,
      };
      this.videoH264.next(videoMessage);
    } else if (msg.frame.oneofKind === "mjpegImage") {
      // console.debug('received mjpegImage. length=%d', msg.frame.mjpegImage.image.length);
      const videoMessage: VideoMessageMjpeg = {
        videoStreamKey: videoStreamKey,
        image: msg.frame.mjpegImage.image,
      };
      this.videoMjpeg.next(videoMessage);
    } else {
      console.debug("Unsupported video codec: " + msg.frame.oneofKind);
    }
  } else {
    console.error("Missing expected 'videoStreamKey' on VideoMessage");
  }
}
```
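To actually show an mjpeg frame in the browser, the raw JPEG bytes in `VideoMessageMjpeg.image` can be converted to a data URL and assigned to an `<img>` element's `src`. A minimal sketch (the base64 step uses Node's `Buffer` here; in a browser, `btoa` or `URL.createObjectURL` on a `Blob` would be used instead):

```typescript
// Convert the raw JPEG bytes of one mjpeg frame into a data URL
// that can be assigned directly to an <img> element's src attribute.
function mjpegFrameToDataUrl(image: Uint8Array): string {
  const base64 = Buffer.from(image).toString("base64");
  return `data:image/jpeg;base64,${base64}`;
}

// Usage sketch, assuming videoMjpeg is the subject fed by incomingVideo above:
// this.videoMjpeg.subscribe(msg => {
//   imgElement.src = mjpegFrameToDataUrl(msg.image);
// });
```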
The source of this code can be found at https://github.com/Acuity-vct/vcscollab/tree/main/node/vcs-client-ts-api

View File

@ -1,7 +0,0 @@
PostgREST works directly through the proxy. Login is required through
`http://localhost:3090/api/v1/authenticate`. The JWT from the gateway contains all servers' JWTs; the gateway uses the correct one when routing to PostgREST:
`http://localhost:3090/Mac-Pro-dev/postgrest/camera_vw`
Tested through the nginx proxy with `/gateway` in the path: `http://localhost:1080/gateway/Mac-Pro-dev/postgrest/` works when using the token the gateway provided.
In this case the request is routed to the gateway, and the gateway routes it to the instance of PostgREST on the server `Mac-Pro-dev`.
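The routing above implies URLs of the following shape. A small sketch (the helper just assembles the gateway URL; the authentication step is shown as a comment since the token comes from the gateway):

```typescript
// Build a PostgREST URL that goes through the multiserver gateway.
// serverId must match an "id" entry in the gateway's server list.
function gatewayPostgrestUrl(
  host: string,
  port: number,
  serverId: string,
  resource: string,
): string {
  return `http://${host}:${port}/${serverId}/postgrest/${resource}`;
}

// Usage sketch: authenticate first, then query with the gateway's JWT.
// const token = ...; // from http://localhost:3090/api/v1/authenticate
// await fetch(gatewayPostgrestUrl("localhost", 3090, "Mac-Pro-dev", "camera_vw"), {
//   headers: { Authorization: `Bearer ${token}` },
// });
```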

View File

@ -1,2 +0,0 @@
Install `yarn` and `just`, then run:
`just _run-example-overlay-react`

View File

@ -1,26 +0,0 @@
```java
if (continuousButton.isSelected()) {
    newCameraData.setLegacyMotion(false);
    newCameraData.setAdvancedAnalytics(false);
    newCameraData.setMotionAlarmEnable(false);
    newCameraData.setPauseRecording(false);
    // if (loadedAdvancedSchedule) newCameraData.setChanged(true);
} else if (motionButton.isSelected()) {
    newCameraData.setLegacyMotion(true);
    newCameraData.setAdvancedAnalytics(false);
    newCameraData.setMotionAlarmEnable(alarmSelected);
    newCameraData.setPauseRecording(false);
} else if (analyticsButton.isSelected()) {
    newCameraData.setAdvancedAnalytics(true);
    newCameraData.setMotionAlarmEnable(alarmSelected);
    newCameraData.setLegacyMotion(false);
    newCameraData.setPauseRecording(false);
} else if (liveOnlyButton.isSelected()) {
    newCameraData.setLegacyMotion(false);
    newCameraData.setAdvancedAnalytics(false);
    newCameraData.setMotionAlarmEnable(false);
    newCameraData.setPauseRecording(true);
    // if (loadedAdvancedSchedule) newCameraData.setChanged(true);
}
```

View File

@ -1,65 +0,0 @@
To update individual camera settings, use `PATCH` `/api/v1/admin/camera/save/{cameraNumber}`:
```bash
curl --location --request PATCH 'http://localhost:80/api/v1/admin/camera/save/2' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer eyJraWQiOiJWQ1MiLCJhbGciOiJFUzI1NiJ9.eyJpc3MiOiJWQ1MiLCJzdWIiOiJhZG1pbiIsImV4cCI6MTc0ODcxNjYwMywiaWF0IjoxNzE3MTgwNjAzLCJzY29wZSI6ImFkbWluIn0.l880RzbHH_mxpOjRKp-11ktInpTbldG5Ok4mn4CROE6jY5TiCO6M3Z1B1w-b72Xav0jBGgIb156pvEL6euNCmw' \
--data '[
{"op":"replace","path":"/recordingOption","value":"Live"},
{"op":"replace","path":"/enable","value":"true"},
{"op": "replace", "path": "/scaledLiveFramerate", "value":"3"}
]'
```
## Json Patch
- `op` is the operation field; in this case `replace` should be used.
- `path` is the property on the object to change; `/recordingOption` applies to the recording option property.
- `value` is the value to set the property to.
The body contains a JSON Patch that replaces values in the `cameraSaveDTO`. The path can reference any field inside `cameraSaveDTO`; the following fields can be specified in the path:
```json
{
  "id": 0,
  "cameraDescription": "string",
  "modelName": "string",
  "group": "string",
  "rotated": 0,
  "codec": 0,
  "scaledLiveFramerate": 0,
  "ptzDriver": 0,
  "activeIOPort": 0,
  "server": 0,
  "groupNbr": 0,
  "activeIOText": "string",
  "motionAlarmScheme": "string",
  "intercomIP": "string",
  "intercomName": "string",
  "monitoringIndication": true,
  "receiveAudio": true,
  "motionDetection": true,
  "motionAlarmEnable": true,
  "snapShot": true,
  "pause": true,
  "enabled": true,
  "scaledStream": true,
  "unavailable": true,
  "requiresMask": true,
  "recordingOption": "Continuous",
  "scheduled": true,
  "analytic": true,
  "down": true,
  "ptz": true
}
```
`recordingOption` can have the following values. To get this list as JSON, use `GET` `/api/v1/admin/camera/recording-options`:
```json
[
  "Continuous",
  "LegacyMotion",
  "AdvancedAnalytics",
  "Live",
  "API"
]
```
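The same PATCH can be issued from TypeScript. A minimal sketch that builds the JSON Patch array from the curl example (the request itself is commented out since the host and bearer token are placeholders):

```typescript
// One JSON Patch "replace" operation, as accepted by the camera save endpoint.
interface ReplaceOp {
  op: "replace";
  path: string;
  value: string;
}

function replaceOp(path: string, value: string): ReplaceOp {
  return { op: "replace", path, value };
}

// Set camera 2 to Live recording, enabled, 3 fps scaled live framerate.
const patch: ReplaceOp[] = [
  replaceOp("/recordingOption", "Live"),
  replaceOp("/enable", "true"),
  replaceOp("/scaledLiveFramerate", "3"),
];

// Usage sketch (host and token are placeholders):
// await fetch("http://localhost:80/api/v1/admin/camera/save/2", {
//   method: "PATCH",
//   headers: { "Content-Type": "application/json", Authorization: `Bearer ${token}` },
//   body: JSON.stringify(patch),
// });
```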

View File

@ -1,24 +0,0 @@
-----BEGIN PGP MESSAGE-----
Version: BCPG v1.70
owJ4nJvAy8zAxbh4UYDvYb74e4ynD6xNEs/JTE7NK051y8xJ9UvMTdX1TM/LL0pN
SYs3Oq+sq6sAlVZIA8pzKYeUpip4leYpGJgoGBpaGVlaGVoquLqEKBgZGJlwJeYk
FuX6ZOZmltgaGRhw5ReUZObnWdrCWOZwlgWcZQpnmQFZxalFZalFni62IanFJbpl
ycVcuYnJRrYg0hBMGtiaGsVYmZrEWBkYxFilmsVYJRsDaUuuxIICoEMTQQbZhjkH
c2XmFZck5uS4JJak2hqY6RuY6INdmJFYlFKeWJQKtAJknAmYNLaFOsEY7hgTICs5
P7cgMa/SNjG5NLOkkiu1oiCzKBVioLm+oSXIQHOockNbIyjLCKQRGIpFiejhYGBr
yNXJpMXCwMjFIMLKBApeXhlHsNlhySXeqZUMXJwCsJh5lsH/v+RyYuLi9D2zMveb
2fJPW2rY8/u8+6R5kowzX5lL7HZbk6lWZaqY0zr96YT+7QFle0qrFERVVRt8LF08
J3BFVy2LKg7VqrGb178m+DTPq5dT1lczrImOOvTx0X1nvdstK35xLTgXrJl3VPL5
m7JcvuPfVlgvTt53UWdWnIyhckvVrw+mJjuf+Qkumq6s8LLuuKVj4d769Plc2oaz
eftWm3e+vt+kdkDBuo1DdI5bskHF7fvG1+4vma7WIiOTqbm0c8Lno2aO/J+f7Oj9
eHvVLb1aj2ynfT57f3uLfRd7GCG1lb1TTGlz+LzUE+8dWG62GjE9uRrQelvmj4D+
eTPbRsltyzTvCZ4T27NrXcT7xuU5787Z1WlfeftltfKazNIYr2b5mAS9VwdazCZv
qe3+H1Oa8CW92T7pzJMTZvemt5qvVm83Db+y4lOEUpfJs3PFC2/qOdxsWSC5eqH2
Hy0lcZHj58q6frOmZrQKhzuqh74Jzt7nmnlftJLTujlxO9vemqiTSj9muuzO4I47
f6DhWCBb3/f3ho3yTGbb52nMSNU51P2QdX3vlY8L73z+kJge32yyX9XBplr++xnH
ednVnQl5s6NXC/E61W/o+XT27keL7E9TWxr8C/y2LCxUec1Q9WrGlfoGtrIvKguj
mCJLD67Irs9XObGC0+sLq6mU4CkfhaMFx8XkjRcfSNHvXvf5ctD8i7eS/Q8/4QYA
ZC+BLA==
=TfNw
-----END PGP MESSAGE-----

View File

@ -1,161 +0,0 @@
server 192.168.2.242
vcs:Asdf370)
*Required*
- vcs already installed (can be an existing or clean install). For an existing installation, the migration scripts will need to be run to insert the data from sqlite into the postgres database.
To start the process of configuring postgres and setting up docker compose, stop the vcs service using `systemctl stop vcs`.
Server SSL needs to be disabled; nginx will handle SSL. Disable SSL in `/cfg/settings.xml` by setting `WebServerSSL="false"`:
```xml
<General 
.
.
.
WebServerPort="80"
WebServerSSLPort="443"
WebServerSSL="false"
```
## Configure env.sh to use postgres
Add `-Ddatabase.type=postgres` to the `OTHER_JVM_OPTS` variable to use postgres as the database.
> `OTHER_JVM_OPTS` is not an environment variable; it is only read by `startvcs.sh`.
Also configure the following connection parameters.
> The `export` denotes that the value is an environment variable.
```shell
OTHER_JVM_OPTS="-Ddatabase.type=postgres <other-vm-args>"
#  datasource config postgres
export DATABASE_USER=vcs
export DATABASE_PASSWORD=vcs
export DATABASE_URL=jdbc:postgresql://localhost:5445/postgres
```
> NOTE: `DATABASE_URL` should point to the postgres server. When configuring multiple servers there is a single database, so replace `localhost` with the primary postgres server's IP if the postgres instance is not on the same machine as vcs.
When not running as root, add the user to the docker group: `sudo usermod -aG docker $USER`
# Running docker compose
Configure the `.env`, `.env.local`, and `proxyGatewayConfig.json` files. Adjust the server IP in each to point to the correct server.
## Configuration for compose
### Initialize docker and docker compose
The upgrade pack will contain a `docker.zip` and an `initialize_docker.sh`. Copy these two files to `/usr/vcs`:
```bash
cd /usr/vcs
./initialize_docker.sh
```
The initialize script installs docker and the docker compose plugin and copies the configuration files into a new folder, `/usr/vcs/compose-cfg`, that holds the configuration for the docker compose environment. Edit the `.env`, `.env.local`, and `proxyGatewayConfig.json` files there as specified in the next sections.
### .env configuration
Change `VCS_SERVER_HTTP_URL=http://` to the vcs server's IP address (the IP of the machine), and also specify the port that vcs is listening on *(this should be an http URL)*.
> NOTE: VCS should be running on a port higher than 80. If vcs is not on 80, make sure this URL reflects that. When running vcs on a higher port, the existing swing client will talk directly to vcs, so make sure to open that port in the firewall.
Set `POSTGRES_VOLUME_PGDATA` and `POSTGRES_VOLUME_ARCHIVE` to a location where the database should store files. In this example it is `/mnt/video00`; this might need to change on production servers. The video drive or the root partition could be used.
```shell
APP_USER="vcs"
APP_PASSWORD="vcs"
DB_HOST="postgres"
DB_PORT="5432"
DB_NAME="postgres"
SSL_CERTS="./nginx_ssl_certs"
#Location of docker container configuration for multiserver-proxy and frontend
CONFIG_DIR="/usr/vcs/compose-cfg"
POSTGRES_VOLUME_PGDATA="/mnt/video00/dockerdata/postgres/pgdata"
POSTGRES_VOLUME_CONFIG="./postgres/config"
POSTGRES_VOLUME_ARCHIVE="/mnt/video00/dockerdata/postgres/archive"
VCS_SERVER_HTTP_URL="http://192.168.3.227:80" # cannot be localhost
```
### new-ui configuration (.env.local)
The new UI configuration is located in `.env.local`.
Change `NEXT_PUBLIC_API_SERVER` to the IP address of the vcs server.
`NEXT_PUBLIC_API_HOST` should be set to `http` if the new UI will not use https; otherwise set it to `https`. `NEXT_PUBLIC_WS_SCHEMA` should be set to `wss` if using https or `ws` if using http. `NEXT_PUBLIC_API_PORT` should be set to the http or https port that nginx is listening on *(not the vcs http/https port)*.
`NEXT_PUBLIC_API_SUB_PATH` should be left blank when configuring for a single server; otherwise it needs to be `/gateway` to route requests to the multiserver gateway. For more information on the multiserver setup see //TODO:
Replace `<server-ip>` inside `NEXTAUTH_URL` with the IP address of the server. Also replace `<nginxport-1080>` with the same port used for `NEXT_PUBLIC_API_PORT`.
```shell
NEXT_PUBLIC_API_SERVER = <server-ip> #public ip used outside of the container to access the proxyGateway or vcs server (cannot be localhost as it will attempt to authenticate inside the container at vcs /api/v1/authenticate)
NEXT_PUBLIC_API_HOST = http
NEXT_PUBLIC_API_PORT = 1080
NEXT_PUBLIC_WS_SCHEMA = ws
NEXT_PUBLIC_API_SUB_PATH = ""
NEXT_PUBLIC_PROJECT = "Acuity-VCT" # "Acuity-VCT" | "Art Sentry",
NEXTAUTH_SECRET = my_ultra_secure_nextauth_secret
NEXTAUTH_URL = http://<server-ip>:<nginxport-1080>/artsentry/api/auth
NEXTAUTH_URL_INTERNAL = http://localhost:3000
```
### Gateway server config
The gateway server configuration is located inside `proxyGatewayConfig.json`. This file contains a list of servers and their server IDs.
This is an example of configuring the gateway for two servers.
The only thing that should change in this file is the `servers` array. For a single-server setup, just define the IP of the single server in this array. When the new UI is configured to go through `/gateway`, this configuration must be correct in order to use multiserver in the new UI.
```json
{
  "servers": [
    {
      "ip": "192.168.3.227",
      "id": "VCS-DEV-INT-1"
    },
    {
      "ip": "192.168.2.31",
      "id": "290WPD2"
    }
  ],
  "secretKey": "secretKey",
  "listeningPort": 3090,
  "allowedOrigins": [
    "*"
  ]
}
```
### Start the docker compose
#### Login to the github docker repo
The server will need to authenticate with the github package repo. Generate a token that has access to artsentry and has package read access.
```bash
docker login ghcr.io -u USERNAME
```
Change directories to `/usr/vcs/docker` and make sure the `.env` file is located in `/usr/vcs/compose-cfg`.
The following command will start up all the services that vcs requires:
```bash
docker compose --env-file ../compose-cfg/.env up -d
```
To stop all the docker services, run:
```bash
docker compose --env-file ../compose-cfg/.env stop
```
Since the `.env` file is not located next to the compose file, it must be passed to each docker compose command.
To remove all the docker containers:
```bash
docker compose --env-file ../compose-cfg/.env down
```
> NOTE: Make sure the vcs server is not running until all the services are up and the database is properly configured in the vcs `env.sh` file.
After the docker compose services are running, start the vcs server:
```bash
systemctl start vcs
```