fix: pages
Some checks failed
continuous-integration/drone/push Build is failing

This commit is contained in:
2025-07-31 11:14:46 +02:00
parent 4bb6b0228a
commit 6266d97fc1
13 changed files with 7 additions and 11851 deletions


@@ -60,7 +60,7 @@ The backend services already exist, and looks like this:
### Backend
-![backend services](assets/2023-09-09-backend-services.png)
+![backend services](/assets/2023-09-09-backend-services.png)
What happens here is that we have a variety of business services, which are used
and serviced by either a user, or an engineer specifically. All the models are
@@ -73,7 +73,7 @@ These domain events are what ends up in the data platform.
### Internet of Things
-![iot](assets/2023-09-09-iot-service.png)
+![iot](/assets/2023-09-09-iot-service.png)
Like the backend services, we have IoT services for various purposes, such as
storing measured data, or controllers for performing actions. Most of these
@@ -148,7 +148,7 @@ An event will flow through a NATS global event consumer, which will then get a
schema applied while being transported to a DataFusion pipeline (an Apache Spark
alternative).
-![data ingest](assets/2023-09-09-data-ingest.png)
+![data ingest](/assets/2023-09-09-data-ingest.png)
This is not very different from normal data ingestion pipelines, though without
Kafka and Spark. If a schema fails to apply, or cannot be found, the event will be
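The schema-gating step described above can be sketched in a few lines — a minimal, in-memory Python sketch, assuming a dict-based registry and a dead-letter list (the names `SCHEMAS`, `dead_letter`, and `ingest` are illustrative, not the post's actual implementation):

```python
# Sketch of the ingest gate: events from the NATS consumer get a schema
# applied before entering the DataFusion pipeline; events whose schema is
# missing or fails to apply are routed to a dead-letter list instead.
# SCHEMAS, dead_letter, and pipeline are illustrative assumptions.

SCHEMAS = {
    "measurement.recorded": {"device_id": str, "value": float},
}

dead_letter: list[dict] = []
pipeline: list[dict] = []

def ingest(event: dict) -> None:
    schema = SCHEMAS.get(event.get("type"))
    if schema is None:
        dead_letter.append(event)  # no schema found for this event type
        return
    try:
        typed = {field: cast(event["payload"][field])
                 for field, cast in schema.items()}
    except (KeyError, TypeError, ValueError):
        dead_letter.append(event)  # schema application failed
        return
    pipeline.append({"type": event["type"], **typed})

ingest({"type": "measurement.recorded",
        "payload": {"device_id": "d1", "value": "21.5"}})
ingest({"type": "unknown.event", "payload": {}})
```

The same shape works regardless of transport; only the `ingest` call site would change when wired to a real NATS subscription.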
@@ -169,7 +169,7 @@ date base models, and integrated transformations by the users. It includes
support for compaction, versioning, scale-out queries, transformations and much
more.
-![data ingest](assets/2023-09-09-data-lakehouse.png)
+![data ingest](/assets/2023-09-09-data-lakehouse.png)
Here we see data entering the system from the NATS listener, which pulls
data out of the NATS stream and ingests it into the DataFusion pipeline
@@ -190,7 +190,7 @@ Trino (distributed sql).
I will need real-time aggregations, so a set of allowed
transformations should end up in ClickHouse for rapid querying.
-![real time analytics](assets/2023-09-09-data-real-time.png)
+![real time analytics](/assets/2023-09-09-data-real-time.png)
Much of the same architecture from before is used (see the data lake section);
however, we also put data in `clickhouse` to enable rapid querying on a
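The kind of real-time aggregation ClickHouse serves here can be mimicked with an in-memory stand-in: a running per-device aggregate updated as transformed events arrive. This sketch is an assumption for illustration, not the post's actual ClickHouse setup:

```python
# In-memory stand-in for a real-time aggregation: each incoming event
# updates a running (count, sum) per device, so averages are answered
# instantly without rescanning history. Illustrative only.
from collections import defaultdict

totals: dict[str, tuple[int, float]] = defaultdict(lambda: (0, 0.0))

def record(device_id: str, value: float) -> None:
    count, total = totals[device_id]
    totals[device_id] = (count + 1, total + value)

def average(device_id: str) -> float:
    count, total = totals[device_id]
    return total / count if count else 0.0

record("d1", 20.0)
record("d1", 22.0)
```

In ClickHouse the same effect would come from a materialized view maintaining the aggregate state as rows are inserted.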